Wednesday, July 18, 2018

Testing Tour Stop #15: Pair Evolving a Test Strategy with Toyer

The testing tour continues. Today I had the honor to pair test with Toyer Mamoojee. Since the end of 2016, when we agreed on our first pact to try ourselves as conference speakers, we have been having a call every two weeks. We talk about all things testing, exchange experiences, trigger thoughts, and provide feedback and support. Still, we had never tested a product hands-on together. You can imagine how happy I was when Toyer took the chance and scheduled a pair testing session with me on my testing tour!

What topic to pair on?

That's one of the common first questions that come up after scheduling a session. In this case, Toyer said he would love to do something from scratch, gather test ideas together, and align our thinking. We know each other and our viewpoints quite well, but we hadn't yet practiced testing together. He wanted to see how we really go about testing certain things, ask questions, and observe each other's thought processes while actually testing.

Based on his input, I thought we should tackle an application neither of us knows, explore it, and come up with a very first test strategy based on the gathered knowledge. As I knew that Toyer had discovered mind maps for his work some time ago and learned to love them for many purposes, I thought that this could be our way to document our very first strategy draft, keeping it lightweight, easily editable, and visual.

Having a list of potential applications to tackle at hand, I reached out to Toyer and asked whether he wanted to learn more details about my idea before our session, or rather not. He answered "I'm tempted to say share more information.. but I would like to be surprised too, as I want to see how I can tackle something without preparing". So he chose the latter option, which I can really relate to. Over the last few years I have been continuously learning to tackle things without over-thinking them; and I'm not done with learning this yet.

Evolving a Strategy While Exploring

At the beginning of our session I presented my topic idea to come up with a test strategy for a new product, and Toyer agreed to go for it. Lucky me, otherwise we both would have had to cope with an unprepared situation! ;-) So I offered different options for our system under test, of which Toyer chose Blender, an open source 3D creation tool. I had had a few rare encounters with this application back at my first company, where we developed an AI middleware for game developers, but had hardly touched it ever since. Toyer thought it looked really promising as we normally don't get to test these kinds of applications.

Toyer shared that, first of all, he would ask what kind of need this application wants to fulfill and do related research upfront. For the limited time box of our session, however, we decided to skip this and explore ahead. Toyer accepted my suggestion to draft our strategy in a mind map, so we created it and continuously grew it according to our findings. He also agreed to do strong-style pairing while exploring, so I started up my favorite mob timer, set the rotation to four minutes, and off we went. It quickly became clear that we knew each other well. Collaboration was easy and communication fluent. We could fully focus on exploring Blender from a high-level point of view, trying to grasp its purpose and main capabilities, identifying limitations and potential risks. We were actually doing a recon session, just as Elisabeth Hendrickson describes in her awesome book Explore It! Reduce Risk and Increase Confidence with Exploratory Testing.

Throughout our session we gathered lots and lots of findings and discoveries, adding more and more important points to our test strategy.
  • Learnability. The application is not intuitive at all. It's a real expert tool. Still, everybody is a first-time user once, and even if you know the domain the user experience this product offers is not that great.
  • Functional scope. The more we explored, the more functionality we discovered. The whole tool seems really powerful, but again, is not easy to understand.
  • Screen resolution. The GUI is cluttered with many UI elements, sidebars, popups and more. On our laptop screen that was already a challenge, and it will still be one on larger screens.
  • Usability.
    • Menus, popups and tooltips looked very similar which made it hard to distinguish the purpose of each.
    • Feedback on actions was often missing or confusing.
    • Some sidebars displayed content related to views we previously visited, not getting updated with information of the current view. This way, they sometimes obscured needed information.
  • Consistency.
    • Some actions worked in one area but not in the same way in another.
    • Some sidebars were named x but the header label said y.
    • Some delete actions asked for confirmation, others just instantly deleted the item.
  • Portability. We tested Blender on macOS. The product is also offered for Windows and Linux. At several points we found strange unexpected behavior and made the assumption that it might have been due to porting issues for the macOS version. For some points, I could even confirm that assumption when writing this blog post and checking the Windows version of Blender.
  • Maintainability and reusability. The GUI offered many sidebars, popups and views that shared similar layouts and menus. We noted to investigate whether they were duplicated or re-used components.
  • Robustness. We encountered error messages on invalid input that was not caught or prevented.
  • Automatability and testability. The application offers a Python API. We found Python commands offered in tooltips, the API reference in the help menu, and an integrated Python console. The console behaved differently from other terminals we knew, but it was still very interesting that you could automate operations (see the sketch after this list), which would also increase the product's testability.
  • Discoverability and documentation. The help menu offered an operator cheat sheet; clicking on it triggered a temporary message to look at the OperatorList.txt, which we could not find. Only later did I learn that we had not come across the text editor where you could open the named text file. What a hidden support feature. Also, we found the linked release notes page to be empty. We didn't dive deeper into the manual, but all available documentation would have to be tested as well, especially for an expert tool like this.
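Because Blender exposes Python in tooltips, in the help menu, and in an integrated console, even a tiny script hints at how operations could be automated and checked. Here is a minimal sketch of that idea, assuming Blender's bundled Python and its bpy module; the object names and the simple assertion are illustrative assumptions, not a tested recipe.

```python
# A minimal sketch of automating Blender through its Python API (bpy),
# e.g. typed into the integrated Python console or run via
# "blender --background --python script.py". Names below are illustrative.
import bpy

# Remove the default cube if it is present, then add a sphere and verify it exists.
default_cube = bpy.data.objects.get("Cube")
if default_cube is not None:
    bpy.data.objects.remove(default_cube, do_unlink=True)

bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 0.0))
assert "Sphere" in bpy.data.objects, "expected the newly added sphere to be registered"
print([obj.name for obj in bpy.data.objects])
```
Even a handful of such commands would make scripted checks possible, which is what made the integrated console so interesting from a testability perspective.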
All in all, we made our way through many parts of the application. We made quite a few assumptions. And we found there was still a lot we hadn't seen yet. In the end, we didn't have a final test strategy, but a good starting point to be iterated over.

Time to Reflect

We covered a lot in the limited time. We gathered lots of insights, ideas, and assumptions to verify. We tested a product in a domain neither of us knows much about, a desktop application instead of our usual web applications. We tried to gather information and keep a holistic view on things, not diving deep yet, focusing on uncovering the different aspects to test for such a tool. All the while we mapped out our world and the points to tackle in our test strategy. As we learned more, our strategy evolved. We didn't reach an end by far. If this were our product to test, we would keep iterating over it while learning more.

The unknown domain had its own charm. We approached the product as a black box, not looking under the hood in the first place. We brought lots of testing knowledge, but quickly saw we lacked the domain knowledge. Toyer made an important point here: when hiring a tester for this kind of product, it would be best to look for someone who has already been exposed to these kinds of tools or related areas of expertise. We could still provide lots of value and ask questions which might otherwise go unasked, but we would quickly pair up with a product person or business analyst to model the product from a domain point of view, and also sit with developers to model the product's architecture.

Pairing up helped a lot once again: to see different things at the same time by looking at different parts of the screen, to grow the mind map way faster than either of us would have done on our own, and to include different thoughts and viewpoints.

Enrich Your Experience

This was the fifteenth stop on my testing tour so far. In the beginning I had only planned ten sessions, one per month from the beginning of this year until the end of October. Although I have already exceeded my initial goal, each further session enriched my personal experience and brought me into contact with different approaches to learn from; all that while practicing my skills hands-on. Right now, I am reflecting on my whole journey so far as I am crafting a talk about my experiences on this tour, which I have the honor to give at CAST and SwanseaCon this year. And just while doing so, another tour stop was scheduled with me, more people indicated interest in pairing up or listening to my lessons, and I'm having further test sessions with awesome people. I'm curious where else this will lead me. What a wonderful time.

Tuesday, July 3, 2018

Testing Tour Stop #14: Pair Evaluating a Visual Regression Testing Framework with Mirjana

Yesterday I enjoyed yet another stop on my testing tour. This time I was joined by Mirjana Andovska. At the beginning of our session she shared that she is the lone tester at her company, just as I had been for a long time. She saw my tweets about my testing tour and knew she had to try that as well, so she signed up. How awesome is that?!

When asked what she would like to pair on, Mirjana answered: automation. I found out that she has a lot more experience in automation than I have, so I was really glad that she wanted to pair with me. She was especially interested in visual regression testing, mutation testing, and automated security testing. All great areas of expertise I would love to learn more about! In the end we decided to go for visual regression testing. Mirjana was even so kind as to prepare playground projects so we could start our session right away and get to know the available frameworks better.

Learning While Walking Through

As Mirjana had kindly taken over preparation, our first step was to walk through the available projects and see what was already there and what she had learned so far. She had revived an older playground project of hers using the Galen Framework, and had created a new project to try out Gemini as a visual regression testing tool. Unfortunately, it wasn't as easy as expected to get Gemini running on her computer: it depends on node-gyp, which has a known issue on her operating system, Windows 10, and requires permission to install additional software.

We decided to go with the Galen playground project first and learn more about this framework before maybe trying to get Gemini running on my macOS laptop. But first of all: why Galen and Gemini? Mirjana referred to a quite recent overview of the different frameworks available for visual regression testing. Based on her team's needs and what she had read about the tools, she found that Galen and Gemini looked most interesting to check out first.

The playground project Mirjana provided was based on the getting started documentation and the first project tutorial for the Galen Framework. She had already implemented a first test for a sample website. We only had to run Selenium locally as a standalone server, execute the tests, and see how it all worked. Just by Mirjana walking me through the playground project, we both learned more about how the framework works, extending our knowledge along the way.

Which Framework to Choose?

The main purpose of our session was to learn more about the Galen framework and evaluate it. We wanted to discover pros and cons, as well as learn the answers to the following questions.
  • How easy is it to set it up and configure?
  • How easy is it to write a test?
  • How easy is it to maintain the tests?
  • How easily can you understand the resulting reports? Can you quickly see what is wrong so you can fix it based on this information?
With every step we found out more. Here's what we learned.
  • The spec files included CSS locators for objects as well as the test specification. We noted to find out later whether we could also keep the locators separate from the test specification.
  • Galen takes screenshots of the whole page as well as of the single elements to be located. Using images of only part of the page for comparison was pretty nice. However, when looking at the different menu items of a sample navigation bar, we found that the images were cut off at different places, sometimes even cutting off the menu item text. We felt this was quite strange, so we added it to our list of things to investigate later.
  • The test report format can be configured. We tried the HTML report, which included a heat map visualizing what the tests covered; pretty nice. However, the report only captured the console output from the framework, not from the application itself, which would have made it easier to see the information needed to reproduce a bug.
  • The test run didn't close the browser in the end, so we noted that we would need to take care of this ourselves.
  • We wondered how to add more specs in one test runner file. We postponed this question for later investigation.
  • We learned that we can specify tests in JavaScript as well as in other languages like Java.
  • We saw options to test for desktop or mobile. We decided to not dive deeper here for now.
  • We found that it's really easy to run the tests not only locally but also on different servers, or on services like BrowserStack or similar.
  • We ran the tests on BrowserStack, and the tests failed due to differences. At first we assumed that the differences were due to the different operating systems Windows 7 and 10 as the Chrome version was the same. However, when looking at the compared images, we saw that the expected version showed a scrollbar where the actual image did not.
  • This led to the question of how the expected comparison image was created. Maybe on first run? Maybe when running the "dump" command we found? Or did it take the expected image from the last test run?
  • We had a deeper look at the command to create a page dump.
    • The Galen documentation told us that "With a page dump you can store information about all your test objects on the page together with image samples."
    • We ran the command and waited. As time passed, we began to wonder whether it was still running, especially as it no longer provided any log output. We decided to let it run a bit longer and found that it took about three minutes.
    • We learned that the dump command, unlike the test command, did indeed close the Chrome browser after it ran through. The dump process generated lots of files: HTML, JSON, PNG, JS, CSS files, and more.
    • We discovered that the dump report gave us spec suggestions, but only if we selected two or more defined areas, like the viewport and a navigation bar element. It seemed the provided suggestions always referred to how one area related to another.
    • Mirjana thought this would come in handy when a developer forgot to align all navigation elements. I added that we could use this feature to explore the page. As the tests are quite slow in nature, we might only automate a part of them as a smoke test and explore around that manually.
    • If you'd like to have a look yourself, here's the official example page of a dump report, ready to be played with: http://galenframework.com/public/pagedump-example/page.html
  • Checking out the official documentation, we discovered the "check" command to run a single page test (the sketch after this list shows how the test, check, and dump commands can be invoked).
    • Here we learned that there are also options to create a TestNG, JSON or JUnit report.
    • The html report resulting from the check command showed a different structure than when executing "test", which we found interesting.
  • We still had not seen how the reference images were really created and wanted to test our assumptions. Sometimes you need to recheck the basics also at a later stage.
    • Documentation told us that the dump creates images that should be used for comparison.
    • When experimenting, we found that for "check" the stored reference images were actually used for comparison. However, when running the spec files as "test", it seemed to take the image of the last run as reference. Might this be a bug? It's always interesting to check out the reported issues when evaluating a framework.
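To keep the commands we touched in one place, here is a small sketch that drives the Galen CLI from Python. It assumes the galen command is on the PATH, and the suite, spec, URL, size, and report paths are placeholders from a playground setup; treat the flags as assumptions based on our session notes rather than as a reference.

```python
# A sketch of invoking the Galen CLI commands we explored (test, check, dump)
# via subprocess; all file names and URLs are placeholders.
import subprocess

def run(args):
    print(">", " ".join(args))
    subprocess.run(args, check=True)

# Run a test suite and produce an HTML report.
run(["galen", "test", "homepage.test", "--htmlreport", "reports/test"])

# Check a single page against a spec file.
run(["galen", "check", "homepage.gspec",
     "--url", "http://localhost:8080", "--size", "1024x768",
     "--htmlreport", "reports/check"])

# Create a page dump with image samples and spec suggestions
# (this took about three minutes in our session).
run(["galen", "dump", "homepage.gspec",
     "--url", "http://localhost:8080", "--size", "1024x768",
     "--export", "dumps/homepage"])
```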
At the end of our session, I asked Mirjana about the pros and cons she saw regarding the Galen Framework based on what we had learned so far.

Liked
  • Easy to change configuration and run parameters, providing the flexibility to run the tests anywhere and in different combinations
  • Specifications for objects
  • Suggested specs feature; she shared that I opened her eyes to how to use it for exploring as well
  • Java or other options to write tests; depending on qualification or the type of project it's good to have the freedom to choose
  • Easy to run on Jenkins
  • Easy framework setup
  • Really nice documentation
So-so
  • Slightly confusing tests, maybe related to not being too used to JavaScript
  • JavaScript files as an additional layer of configuration besides the Galen config, the run command, and the spec
  • Structure of the tests; we kind of tackled them but need to read more, as we would also need to maintain them
Didn't like
  • You have to be careful with indentation when writing tests, as you get an exception if it differs from what's expected; especially if you use another editor for quick edits
  • Different reporting depending on the command used

All in all, Mirjana shared that from her perspective much depends on the purpose of these tests in the first place. Each test should give us an answer to a question, manual as well as automated. Sometimes it's not a yes or no. Developers might not be used to that, but reports can give us much more information than true or false, like a percentage of how much the result differs from the expectation.

As shared earlier, Mirjana had also set up a Gemini playground project. However, the test project failed to run due to missing dependencies, which are not easy to install if you lack the required permissions. Also, this kind of information was hard to search for and find. She had allocated the same timebox to explore both frameworks, but she didn't get as far with Gemini as she did with Galen.

In her experience, you only get a limited timebox, like one day, to try things out. You don't have infinite time or budget or resources. This is usually an important parameter when evaluating frameworks. However, it also depends on the skills of the person who tries them. This means that if you don't have the most knowledgeable person do it, it's a trade-off. I think this is a puzzle you cannot fully solve, as you only discover what kind of skills, knowledge, and experience you really need while you're already at it, not before. Mirjana gave the valuable advice that when you want to evaluate a tool, you need a familiar environment as the system under test. Ideally it's a real project that is very simple so you can write simple tests; otherwise you will lose time inspecting elements and learning about the application.

The Value of Learning Together

In the middle of the session Mirjana asked me: "Did you check the time? Because I'm having fun! :D" In the end, we spent 150 minutes testing together instead of the originally allocated 90 minutes.

I found the session really interesting and valuable. Mirjana had prepared it really well, which I appreciate a lot! So far I had only watched demos of visual regression testing but had never seen it running against an actual product. By pairing up, I now had this opportunity and I'm thankful for it. In my experience I get way farther in less time this way than when I'm on my own.

Mirjana shared that in between her daily work she looks for ways to make her life easier. She tries to experiment, see what is new and what more we can do. Visual regression testing was one of the topics on her todo list. She had started the Galen playground project some time ago and realized by coming back to it now that she had grown a lot and learned a lot more since then. She hadn't seen such a framework in actual use yet either, and setting up a JavaScript project is not her everyday work. By doing it together, I gained more insights about visual regression testing from her and could give more ideas back to her. Now we both have a starting point. And that's awesome.

Monday, June 18, 2018

Testing Tour Stop #13: Pair Penetration Testing with Peter

Today I stopped by Peter Kofler again on my testing tour, picking up our penetration testing endeavors from our first pairing session. We agreed to stay with Dan Billing’s Ticket Magpie as our target system. Last time we had left off with multiple options for how to continue: use tools for automation, try another attack like cross-site scripting, or explore the source code for vulnerabilities. Peter had come across sqlmap, an "automatic SQL injection and database takeover tool". It looked promising, so we decided to give it a try and focused our pairing session on it.

We started out by getting sqlmap to work, installing the required Python version and cloning the GitHub repository. This went nice and easy, without any issues.

The next step was getting to know the tool and figuring out the many options it provides. We had already learned last time that the shop's login form was vulnerable to SQL injection, so we used this as our target. From this starting point, we learned step by step what we needed to provide and what not, what the individual parameters do, and how to see what exactly the tool is trying out.
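For context, here is a sketch of the kind of invocation we iterated on, assuming a local clone of the sqlmap repository and a locally running Ticket Magpie; the URL and form parameters are placeholders, and the flags are simply ones we experimented with, not a recommended recipe.

```python
# A sketch of an sqlmap run against a login form, driven from Python.
# Target URL and POST fields are placeholders, not the actual application details.
import subprocess

target = "http://localhost:8080/login"      # placeholder URL
post_data = "username=admin&password=test"  # placeholder form fields

subprocess.run([
    "python", "sqlmap.py",
    "-u", target,
    "--data", post_data,  # POST parameters to test for injection
    "--level", "3",       # test additional entry points such as cookies and headers
    "--risk", "2",        # include riskier payloads
    "--batch",            # never ask for interactive input, use the defaults
    "-v", "3",            # verbose enough to show the payloads actually tried
], cwd="sqlmap")          # cwd assumes the cloned repository folder
```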

It started out easy - and then it stopped being easy. The nice learning curve we had in the beginning started to flatten out. Whatever we tried, the tool always told us that the login form did not seem to be injectable. But why? As humans we could see that our target provided us with the information needed to identify existing user accounts. Somehow the tool did not recognize this. Or rather: we did not find out what the tool was missing to be able to recognize it.

In the end, we closed our session with the given result and mixed feelings. We were annoyed we couldn't finish successfully; it was a real pity. We still enjoyed our session, however, and learned a bunch again. It looked easy in the beginning, but then we stumbled and left the session with lots of question marks, already thinking about how to continue. The route of going deeper and investigating more might lead us to the solution, or not any further at all. We won't know until we try.

What went really smoothly was our collaboration. We used the Pomodoro technique this time, breaking our 90-minute session down into short intervals of 25 minutes, skipping the breaks. This way we had several checkpoints to decide how to proceed, making sure we always stayed aligned. Also, we instantly applied the pairing style we had defined in our last session. I shared my screen; mostly we worked together on it, sometimes we separated our focus to research simultaneously. We used our own shared Google document from the beginning this time to take notes we both could see.

Compared to our first session, I felt that collaboration went more easily as we had already worked together, gotten to know each other a bit, and had those basic working agreements to build on. Although this second session was not as successful as the first, we both shared the opinion that pairing itself is invaluable for generating ideas about what the problem could be and what to try next to solve it. Pairing challenges our own understanding, creating a shared one. Pairing is the learning time we don't want to miss.

Friday, June 8, 2018

Testing Tour Stop #12: Pair Exploring App Development with João

Recently, I've been at a lot of stops on my testing tour. Here's the summary of another one, and there are more still to come! This time, I had the honor to pair with João Proença. João was one of those people who got inspired by Toyer's and my pact as learning partners at Agile Testing Days 2017. A few months later, at European Testing Conference 2018, we had several long conversations and found that perfect mixture of similarities and differences in each other from which both of us can learn a lot. Toyer and I are really happy that he decided to join our extended pact group along with Dianë Xhymshiti! All in all, it's a pleasure to exchange knowledge with him, so I was really happy he decided to become part of my testing tour.

Preparation? Already done!

Now, normally it was me who prepared a system under test or even a topic to pair on, as most people didn't have a clear notion of what to tackle. This time, however, I did not have to do anything besides looking forward to the session. The reason was that João had arranged for us to have a look at one of his company's products: OutSystems' development environment. How great is that?! Here's part of the email he wrote in advance of our session.
What I'll be proposing we do tomorrow is a bit of paired-up exploratory testing over the "first experience" one has with the product my company, OutSystems, offers. The fact that you know little about it is great!
I'll give you a bit more detail tomorrow when we begin, but we will most likely be using the currently available features of the OutSystems platform, but also some new unreleased ones our teams have been working on!
A couple of teams here that work on product design and UI development are very interested on the outcome of our session and have asked me if we can record it, so that they can watch it later and collect feedback.
Would you be ok with us recording our session for our internal use (a video of our interaction with the platform)?
Recording? I had even discussed that idea with Cassandra at our shared stop, so yes, I really wanted to give it a try. And all this to help people improve their product? Of course, that's exactly what I love to do!

Getting Our Session Started

OutSystems claims to enable you to "Build Enterprise-Grade Apps Fast". On their website you can find the following definition of their product.
OutSystems is a low-code platform that lets you visually develop your entire application, easily integrate with existing systems, and add your own custom code when you need it.
Crafted by engineers with an obsessive attention to detail, every aspect of our platform is designed to help you build and deliver better apps faster.
At the beginning of our session, João explained that OutSystems' development environment, Service Studio, has major release cycles of a few months, just like other IDEs such as Microsoft's Visual Studio or Apple's Xcode. The goal was to tackle the software from a newbie perspective. Product design people were really curious about the experience a new user would have, so João asked me to really speak my mind during testing. Sure thing!

We agreed to split the testing session into the following two steps.
  1. We do the "Build a Web App in 5 min" tutorial for the current version 10.
  2. We create a new app in the upcoming Service Studio version 11, which is currently under test and not yet released.
João does not work on the team developing the UI, so the whole topic was also new for him. Still, of course he's an advanced user already and knows his way around the software. Therefore, we decided that I would be the navigator for step one, and for step two we would switch roles, as he had not seen anything of it yet either.

Then it was about time. We started recording, and we started testing.

The Joy of Exploring a New Application

Now, I won't share the video recording, and I won't share any details about the second step, testing the beta version and trying to create a web app for a real use case. What I can share, however, are the highlights of the first step, doing the tutorial of the current version to get a first feeling for the IDE. Why? Because the great thing is, you can simply try it for yourself! Personal environments are completely free and available in the cloud as long as you keep them running. For our session, João had prepared everything up to the point where we started the five-minute tutorial to develop a new web app.

One of the major topics we came across all the time during testing was usability.
  • Issues with the tutorial itself. Even recognizing the tutorial as a tutorial. The same navigation button in the tutorial sometimes referring to the help, sometimes to the next step. Not seeing helper arrows pointing to the area of interest. Not displaying these arrows when viewing the tutorial steps. Not making it obvious that you don't need Microsoft Excel. Confusing ordering of explanations of what to do. Missing steps to take. Constant re-positioning of the tutorial dialog; at one point I even misinterpreted this behavior as the next step, which triggered me to execute it twice. Not having reached 100% when finishing the tutorial.
  • Inconsistent or misleading styling. Surprisingly different styling for the tutorial, the tutorial steps, as well as any other application dialog. A new application button that looks inactive and does not draw attention. Triangles used as navigation arrows which rather look like running an app.
  • And more issues. Superfluous scrolling not providing more options. A wizard step only providing one option, so why having it at all. Doing the same thing twice leading to two different results on purpose. Viewing your new web application first in the smartphone view. Unexpected field validation in the resulting demo application.
I talked a lot about the feelings I had regarding the application. As a new user, it's very important for me that I don't get lost but find my way through the first steps easily enough, that I get aha moments of what's happening so I learn, and that I don't get annoyed or frustrated by the software. At several points, however, I did get those feelings. I first had to learn the tutorial's way of leading me through; then I could complete it successfully.

I tried to really slip into the shoes of a new user. From a technical point of view I understand how things are built and why certain decisions had been made, but from a pure user point of view I would have asked many questions, so I raised them. Especially as I understood that the tutorial was trying to take the user by the hand and make it as easy as possible for them to create a first app. Furthermore, new users will always compare the system at hand to applications they already know, using those as oracles for their expectations.

In our second step, we switched roles and learned about the new version, with several things revised. Again we found quite interesting things, which I cannot disclose here, but João took them with him to forward to the team.

All in all, this is of course only part of the story, from a tester's perspective: trying to find those things that can still be improved. The product looked really nice and easy to use, and I would love to dive deeper into it!

In Retrospect

It was a really nice experience to have everything so well prepared by João - my thanks to him for that! It was awesome to explore yet another completely unknown application. It was fun to be an actual new user as a tester while at the same time imagining the perspective of a real new user who wants to start developing an app, trying to learn what they would learn. It was great that we collaborated well and that our results were really productive. It's absolutely awesome that we have a video recording of our session! This way we could focus on learning and providing feedback. The downside, of course, was that I now had to go through nearly two hours of video material to be able to extract the goodies for this blog. For real-life sessions I would not skip note taking as we did, so I would still get that quick overview of issues, questions, or further areas to explore. Still, video recording can be invaluable there, too. It's proof of what actually happened! Awesome for reproducing and reporting bugs, or even investigating them.
Sometimes there are bits of information that are crucial but you only acknowledge that afterwards when you had to go back. ~ João
When watching the recording, I saw lots of things I did not realize during testing, even though João drove for me and I could focus on observing and navigating. Exploring a video recording provides lots of feedback as well! About the application as well as about our own procedure when testing, our thoughts and ideas. Lots to learn from.

We set off exploring with certain goals in mind, and we reached our destinations. However, we did not structure too much how to get there. This is the sort of freedom I love when exploring: with every step we learn more about the potential ways ahead of us and can decide which to take. We don't have to follow the one and only route strictly, but can discover so much more on our way, even if it's not in our focus.

On a meta level, I noticed that with more practice it is indeed getting easier for me to pair with other people. It still helps if I have already met them in real life. I feel the fear of looking silly is lessening, so I can focus more on the actual task at hand and freely express my thoughts. Sessions were constructive from the beginning, but they used to come with a layer of self-doubt to deal with on top.

There's another thing that only came to my mind when writing this blog post: it felt awesome exploring an IDE. Again. I only realized now that I have way more experience in that area than I might have admitted even to myself. As a user of IDEs like Eclipse or IntelliJ, of course. But even more importantly, as a tester. I spent my first years as a tester testing an AI middleware for computer game developers. We developed our own engine, IDE and server, and we also integrated with Unity. I did lots of exploring back then already; I even wrote those kinds of beginner tutorials myself! :D And I loved this awesome domain.

Finally, the very best thing: João said he took a lot of value out of the session himself. He can now bring the issues we found to the teams. It was great to hear that they do lots of usability tests and really care about the feedback, using it to improve the product!

Saturday, June 2, 2018

Testing Tour Stop #11: Pair Exploring Test Ideas with Viki

The first time I met Viktorija Manevska was back at Agile Testing Days 2015, the very first conference I had ever attended. Before the first conference day started, I joined the lean coffee session hosted by Lisa Crispin and Janet Gregory. Both Viki and I found ourselves at Lisa's table, sitting next to each other. This is where we started to exchange experiences. Since then, we have met each year at the same lean coffee table on the first conference day! :D Viki helped me a lot by sharing her story as a first-time conference speaker and providing feedback on my own first steps towards public speaking. We had several Skype calls over the last year which proved invaluable for knowledge exchange, just like all my sessions with Toyer Mamoojee. I really love how I have met up with more and more other awesome testers over the last months! I signed up for a virtual coffee to get to know other testers like Milanka Matic, Amber Race or Kasia Morawska. I really enjoyed the calls I had with Mirjana Kolarov. And I loved meeting Anna Chicu in person for the first time! All those meetings were invaluable, full of knowledge sharing, storytelling and having a good time together.
Now, back to Viki. She had just recently moved to Germany to work for a consultancy. Having worked in product companies before, she found that her current project provided new opportunities as well as new challenges. We decided to focus our pair testing session on one of those, on a part of testing that might not come to mind first: generating and challenging test ideas as input for exploratory testing sessions.

A Session of a Different Kind

This time, there was no software under test - instead, we tested and challenged ideas. Test ideas that Viki had already brainstormed and provided as input for our testing. The result? We discovered more ideas. We restructured current ideas. We removed duplicates. We challenged the procedure and more. All with the goal of improving and making it better, as we testers usually do with everything that gets into our hands.

While brainstorming and discussing, we talked a lot about testing itself. Have you used tours as a method to explore a product? I normally use cheat sheets heavily to generate further test ideas, identify further risks, and come up with further questions about things we haven't talked about yet. Cheat sheets like the famous test heuristics cheat sheet by Elisabeth Hendrickson, among other great resources. This way, we started talking about exploratory testing itself. How structured should it be? Sometimes I feel my own sessions would benefit from a bit more structure, but I'm still happy with starting out looking for risks in area A and ending up discovering a lot in section S. However, exploring is not unstructured and seldom goes without a clear mission or note taking. What I really love is Elisabeth Hendrickson's definition of exploratory testing, as stated in her most awesome book Explore It!.
Simultaneously designing and executing tests to learn about the system, using your insights from the last experiment to inform the next
In my opinion, too much structure or being too strict about it removes the part about adapting your path according to what you learned. Having the product show you the way, or better, multiple ways, and following them to explore areas of the unknown. At times I wonder whether Maaret Pyhäjärvi was thinking of that when saying that "the software as my external imagination speaks to me", a notion I have often pondered.

Now, the system to brainstorm test ideas for was a typical untypical web application. Something known like signing up for a user account, something unknown due to the specific domain. We started with ideas around negative testing, UI testing, smoke testing, and workflow testing. Going forward, we produced a lot more ideas, asking so many questions about the product, and also about testing itself.
  • Would you consider different browser settings like turning JavaScript off as negative testing? It could be a normal use case for certain users.
  • What about high security settings or incognito modes? Some users just care more about privacy than others, still they don't mean any harm to your application.
  • What about popup blockers or ad blockers? They are quite frequently used.
  • How to best limit the input for a name field? This made me think of Gojko Adzic's recent book Humans vs Computers as well as his famous Chrome extension Bug Magnet, a really useful tool to quickly test out valid or invalid input values.
  • Or the Big List of Naughty Strings! I've known about it for a long time but haven't used it much yet. There's definitely more to it (see the sketch after this list).
  • This brought up the topic of SQL injection, a common vulnerability to check for. In general, user data must not be at risk, so let's see if we can get login information, take over user sessions, or simply view other users' unsecured pages.
  • What about tampering with system settings like your local time, could you trick the application to provide you better offers that are based on dates?
  • You send emails? Then it is really important to test those properly. In one of my previous teams, formatted emails were displayed very differently in different email clients. Are the parameters of the email template filled with the correct values depending on the given conditions? Are potential images attached that need to be downloaded? Are the links correct? Could the email be considered junk and filtered out? How long does it take to receive the email?
  • What happens if you bookmark the URL while being in the middle of a process to share it with someone else?
  • What about the browser back and also forward buttons, could they lead to unexpected, inconsistent, or simply unpleasant behavior?
  • What about consistency regarding styling? Is the UI accessible, e.g. for color-blind people?
  • Which kind of browsers, browser versions, operating systems/devices, screen resolutions do we really want to support?
  • Would you have manual smoke tests? Why not automate those sanity checks instead of repeating them over and over again?
  • What about alternative paths through the application, mixing up the order in which you can do things or skipping steps - would you still reach your goal as a user? Should you be able to do so or is there a prerequisite for critical things?
  • Have you ever tried the nightmare headline game to discover what would be the worst to go wrong?
  • Oh and: although we have received so many emails about it lately - have you really considered GDPR compliance?
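As a small illustration of the Bug Magnet / naughty strings idea, here is a sketch that feeds entries from the Big List of Naughty Strings into a form field with Selenium. It assumes blns.txt has been downloaded from the minimaxir/big-list-of-naughty-strings repository; the URL and field names are purely hypothetical placeholders.

```python
# A sketch of pushing naughty strings through a sign-up form with Selenium.
# blns.txt is the plain-text list; lines starting with "#" are comments.
from selenium import webdriver
from selenium.webdriver.common.by import By

with open("blns.txt", encoding="utf-8") as f:
    naughty = [line.rstrip("\n") for line in f
               if line.strip() and not line.startswith("#")]

driver = webdriver.Chrome()
try:
    for value in naughty[:25]:                        # start with a small sample
        driver.get("https://example.org/signup")      # placeholder sign-up page
        field = driver.find_element(By.NAME, "name")  # placeholder field name
        field.clear()
        field.send_keys(value)
        driver.find_element(By.NAME, "submit").click()
        # Watch for error pages, broken layout, or the raw string being echoed back.
        print(repr(value), "->", driver.title)
finally:
    driver.quit()
```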

Looking Back

Both of us loved the session! Viki said she was really satisfied with the outcome. It was great to hear a different opinion, especially from another fresh tester's point of view coming from the outside. Sharing experiences, experiments and stories provided lots of value for both of us. I was really glad my input helped her further, especially regarding testing emails, SQL injection and GDPR. I personally really enjoyed having a completely different pair testing session, exploring mind maps, challenging ideas and thoughts, discussing structures, procedures, and strategies. Exchanging information about different positions, roles, expectations, and ways of collaboration - which is so important as well when testing. Talking about exploratory testing, what we understand, what we do, how we give feedback and create transparency, how we provide value. Just as Viki said, quoted freely: "Name it like you want, but it's important to deliver value." I so much agree.

Thursday, May 24, 2018

Testing Tour Stop #10: Pair Exploring Kitchen Planning with Alex

This. Was. Fun. And lots of it! Alex Schladebeck and I went out exploring and came back with lots of potential issues and questions.
When Alex scheduled a pair testing session with me I leaped for joy. Alex had already impressed me a lot with the talks she gave and how she conveyed her messages. The last time she astonished me was at European Testing Conference last February, where she did live testing on stage, talking about the testing she was doing. How awesome and courageous was that? I was fascinated. As I had met her several times already, I seized the opportunity and challenged Alex to take it a step further and do it without knowing the system under test beforehand! Well, Alex indeed accepted the challenge and is going to give this session at Agile Testing Days at the end of this year. I'll be there, front row, cheering her on.


First Things First

Alex asked to explore an application together, and she was happy with any proposal I'd bring up. When starting our session, I presented her with the kinds of applications I had already identified as potential target systems. However, Alex had inspired me by accepting the mystery application testing challenge, so I also offered her a different option: we could think of any word that came to mind, google it, and then go for the first app we came across. Alex really liked the idea, but then decided to keep that for a future session and instead tackle the IKEA kitchen planner together! Why? Because we normally don't get to test something where 3D is involved. And as it happens, Alex had just recently had to plan a kitchen herself. I'd say this qualified her as our subject matter expert!

Strong-style pairing? Sure, let's go for it! Mind mapping tools or anything else for note taking? Nope, Alex confessed she's the old-school pen and paper type of person. So we agreed to both jot down our own notes the analog way.

The Fun of Testing & Talking About It

We started off from the following link, pointing us to the Great Britain version of the IKEA kitchen planner: http://www.ikea.com/gb/en/customer-service/planning-tools/kitchen-planner/ It seems that when I researched potential test applications for my pairing sessions, I had googled something like "ikea kitchen planner" and just picked one of the results. Only later, when writing this blog post, did I realize that the related pages for the other countries not only offer quite a different layout, but also not the software we tackled. In our case, two planning applications were offered: the METOD Planner, and the 3D Kitchen Planner. We went for the METOD one as it was advertised as the new one.

The first thing we noticed was the product claim on the homepage: "Choose your style and get your quote in 1 MINUTE". This felt like an open invitation to test! Well, we stumbled right at the beginning. It took us quite a while to realize why the claim had been made. The tool basically asked us to provide our kitchen floor plan as input and then automatically filled up the available space with elements of the chosen kitchen style. If the 1-minute claim referred to this calculation done by the tool - well, then it's a valid claim. However, this is not what we expected. The claim raised the expectation that it would only take 1 minute from choosing the style until we got the quote. But shaping our kitchen alone took us way longer, let alone choosing the kitchen design. I guess you could compare this to the dubious "5-minute meals" cookbooks! There are always way more things to do that probably take you way longer than it takes a head chef to use already prepared ingredients at their disposal.

After selecting one of the proposed kitchen styles, we were taken to a floor plan editor, which asked us to adapt the displayed room shape to our actual kitchen.
  • The floor plan editor area was sized so large that it exceeded our screen space. This way we did not notice at first that the kitchen plan already had a door on the bottom wall. Therefore we wanted to add a door. A sidebar on the right offered us structural elements: a door, or a window. We thought we could drag the desired door onto the floor plan, but found we had to click on it so it got added to the selected wall. Interestingly, the top wall was selected by default, so the door got added to the same place where the elements for the sink and cooking area were placed by default. As they overlapped, they were highlighted in red to indicate that they were invalidly placed.
  • Although we were not able to drag and drop an element onto the floor plan, we could instead drag and drop it within the editor area to change its position. As expected, we could only move it along the walls, not into the middle of the room. We found we could resize the elements as well as the room walls. When reducing the size of the walls, the door did not move with them, so it was displayed outside the kitchen. Well, that's one way to handle that. At least it was flagged as incorrectly placed.
  • The room could only be adapted to rectangular forms; we had no wall elements or any option to shape corners or define other angles. Well, I guess that's a feature which did not make it into the MVP covering the most common use cases.
  • When we selected the door we had added, it offered us some action buttons. One of them looked a lot like an undo button, but it actually changed the direction in which the door opens. An undo button was dearly missed; it's an editor after all, and as users we would like to be able to return to a previous state. These action buttons alone would already deserve a separate session.
  • We noticed several localization issues regarding translations. For example, structural elements offered the tooltips "DOOR" and "WINDOW", which looked a lot like labels that had been missed during translation; especially as other tooltips showed "Sink area" and "Cooking area". We also came across several spelling errors, so we noted localization as a follow-up topic to explore.
We noticed several areas we could dive deeper into. For now we decided to move forward and pressed a button to "design my kitchen". Our editor area changed and we now saw a 3D visualization of our kitchen. Quite a few things to discover here!
  • Again, the editor area was larger than our screen size, so we tried to scroll in that area; and noticed that using the mouse wheel here would zoom in and out of the 3D view. As a user I really dislike these kinds of implementations, which limit the available scrolling area, forcing me to scroll in the sidebar area on the right.
  • This sidebar now offered different kitchen layouts to select the outline of the kitchen furniture. We decided to go for the largest option and wanted to see if the windows we had placed were handled correctly. We found an icon in the editor toolbar showing a moving camera - which turned out to be a 360° viewer. However, once clicked, the view continued to move without stopping! Quite unexpected.
  • We realized that the door for which we had changed the direction earlier was displaying the handle on the wrong side in the 3D visualization.
  • When clicking an element in the editor, it got selected and the sidebar offered us related alternatives. After choosing another option, however, all related elements got changed, not only the one we selected. Weird! We'd rather have everything selected so we have a preview of what gets changed and what not.
  • We thought about getting back to the floor plan. Maybe the "2D" toolbar icon would lead us there? But no, it was a 2D view of the kitchen walls. It also offered an option to view the kitchen from the top - but wait, why was the countertop suddenly displayed in black when we had changed it to a white one before?
  • When exploring the different modes, we noticed how the toolbar options changed. Interestingly, the toolbar did not always build up from left to right. When selecting the top view in 2D mode, an option vanished from the middle of the toolbar. Why had it not been placed on the right-hand side if it does not apply to all modes? This expectation probably has a lot to do with the direction we're used to reading in; we assumed these kinds of expectations are heavily cultural.
We had noticed a few buttons offered for navigation. However, given what we had seen before, we were already scared to use those buttons. Not a good sign if people have to trust IKEA to really deliver the kitchen as designed! It's not the cheapest product, either.
  • Starting from the 3D view, we chose to risk the browser back button. And it took us back to the homepage without a warning that our changes would be lost. Dislike!
  • We then decided to give the browser forward button a try, which took us back to the floor plan, not the 3D view. Most interestingly, no action buttons were offered on elements anymore, so we could not delete any elements we had placed! This was when Alex shared that you can actually use the following as a testing heuristic: "if you run out of paper space for your notes, it's a bad sign" (freely quoted). Even weirder behavior showed up when trying this again later on. Now the elements were not even displayed anymore. Also, when I then misplaced elements, went forward to design the kitchen, and confirmed the warning about the incorrect positioning, I suddenly saw the 3D view without the toolbar in the editor area!
  • We started again from the 3D view and now gave the back button offered by the application a try. This time we were instantly taken to the floor plan as expected, with all action buttons working.
  • Refreshing the page? Oh, getting back to the homepage with everything gone.
  • Home button? It also took us back to the homepage without a warning that our changes would get lost.
  • New button? Again no warning! Really? Also: this time we went to the floor plan instead of the homepage; but how to select the kitchen style then?
By navigating back and forth, we found further issues with the different editor views at second glance.
  • In the floor plan, we placed two doors on either side of a corner. Although we flipped one door to open to the outside of the room, the elements were highlighted in red as incorrectly placed. We started to drag them farther away from each other, but the validation only passed at an unexpectedly wide distance. In our eyes, they should not interfere at all even when placed more closely together. Well, with both doors opening to the inside they might, so we switched the door back to opening inside - and suddenly the validation passed! Flipping the door back to open to the outside, it failed again. Fascinating.
  • In the 3D view, we discovered that the arrows offered in the toolbar moved the room exactly the opposite way to what we expected. This felt really strange!
  • Moving the camera, we noticed it was hard to get back to a centered view. We found the camera icon reset the view (side note: the icon rather looked like a screenshot icon). However, depending on the selected kitchen layout, it reset the view to a different perspective, so we sometimes ended up looking at a different wall than when first navigating to the 3D view.
  • We wanted to delete a kitchen element in the 3D view, but selecting an element only provided us with an edit icon. Clicking it, we found the remove option hidden inside the edit menu! Why? After removing an element you could fill up the space again with a new element. However, this new element no longer offered the removal option in the edit menu!
  • When choosing a different style for the kitchen parts standing on the floor, the related cupboard elements were listed in the sidebar recommendations for easy selection. But if we changed the cupboards first, the related floor furniture was not listed in the recommendations. Alex wondered whether staff are trained to do it this way and never the other way around, so they never notice this.
  • We found that there were actually two kinds of signals to indicate an ongoing process: a progress bar and a loading circle. Why two different ones? This does not feel consistent.
Throughout our testing session we found that we often wanted to try the same things, having the same thoughts in mind. We also talked about why we wanted to try those things. Alex made a great point here. She said it became obvious that we applied our knowledge of how software is built when testing. Like how, technically, things are sometimes only updated once you click outside an element. How elements could share an implementation although they should differ (like being able to delete the last window but not the last door). How it would be interesting to compare the application with the other 3D Kitchen Planner and see whether a potential re-use of implementations might have introduced issues.

We also wondered why we did not find an option to export what we planned in the METOD Planner so we could import it into the 3D Kitchen Planner for more detailed planning. We wondered whether IKEA staff use the METOD Planner themselves when consulting customers. When transferring my notes into digital form, I got curious. I finally went further and, after designing the kitchen, chose to "save & see my quote". I had to accept a legal notice first, then select a store (Great Britain only), and then finally received the quote and a related project code. However, I couldn't copy it; interaction was disabled! I wonder why. Instead, they offered to let me download my project code - but it was downloaded as an image! So I tried to print the quote, which triggered the generation of a PDF file. From there I could finally copy the code to store for future reference and recover my kitchen plan.

All in all, we found lots of potential issues. But the question is always: Are they relevant? Are they known, but fixing them would just not provide enough value? Maybe. Still, as users (okay, agreed, testers) from the outside we stumbled.

Interestingly, when Maria Kedemo learned that Alex and I tackled an IKEA application, she offered to forward our findings to whom it may concern.

@Maria: Done :) Thank you!

Reflection Time

First of all: We agreed that the session was a lot of fun. We really enjoyed doing hands-on testing together! Alex shared she was once again shocked and excited at the same time to test a production application and see how many potential problems there are! This is like a litmus test for her: if there are already so many problems on the surface, then there are more problems deeper down.

Alex had the impression that our strong-style pairing was not really strong-style, but more of a discussion, talking about testing while testing. In my opinion we adhered to the driver role as being the one on the keyboard executing the navigator's intention; however, we let our driver co-navigate in addition. Still, it felt right and was a fluent back and forth with both of us contributing in many ways, so it was absolutely fine for me. The interesting part of talking about testing was the moments when we realized what we were doing, and why we were asking the questions we asked. We often wanted to try the same thing, we used oracles to decide what to expect, and we used our insights into how systems are built, making all of this explicit.

Alex also shared that she was nervous before the session, as she does not get to do hands-on testing so much anymore. This is really interesting. Although I do lots of hands-on testing in my job, I am nervous before each and every pair testing session, even if I know my pair well, as in the case of Alex! Fear and uncertainty about my skills were major reasons why I decided to do the testing tour in the first place.

During our session, I felt we sometimes lost focus; we saw so many things in so many places. As Alex put it nicely: the squirrel factor. She agreed that we had many threads going, but we either followed them or left them for later exploration. Well, especially for new applications this is often how you do it: you first go broad and then do a deep dive into single areas. Still, I felt I had to focus more on smaller parts, which was also a lesson from previous sessions. We both agreed that it would be great to come back and do another session, diving deep this time.

Also, once again, I have to get better at note taking. After our session, my notepad was a mess, and once again I would have failed Maaret's test of being able to say quickly how many issues I found, how many questions, how many future charters I discovered and why, and so on. Why does this still happen when pairing, although I know better by now? Last time I even thought about recording the session. It would not have helped me present a quick overview, but it would indeed have helped me recapitulate the session, as my notes were quite sparse compared with what we found.

One more point Alex brought up was that we're testers in every situation, seeing issues in processes at airports and everywhere else. I relate to that so much. I like to say that being nitpicky might not be the best quality around family and friends, but it's a great card to play while testing.

The Testing Tour Experiment

This was my tenth stop, so in fact my original experiment is complete!
  • I did 10 pair testing sessions before the end of October 2018, each lasting at least 90 minutes.
  • I paired with 9 different testers, both from my company's internal community and from the external community.
  • The topics focused on exploration and automation, also covering special topics like security and accessibility.
  • I published one blog post per testing session and also made this personal challenge transparent in my company.
Now, did it confirm my hypothesis that pairing and mobbing with fellow testers from the community on hands-on exploratory testing and automation results in continuously increasing skills and knowledge as well as serendipitous learning? I would say yes. However, I will have to take a closer and more critical look when preparing to share my lessons learned at CAST and SwanseaCon.

Still, in theory I could stop now. But I decided to continue accepting sessions until the end of October. Why? Because I'm still learning and I'm still contributing, so it's the right place to be and the right thing to do. Going on a testing tour has worked very well for me, and I recommend giving it a try.

Friday, May 18, 2018

Testing Tour Stop #9: Pair Experiencing the User Perspective with Cassandra

If you haven't come across Cassandra H. Leung yet, I highly recommend checking her out, especially her insightful blog. I have followed her for some time now, and she has inspired me a lot, especially as someone who took the testing community by storm and shared her experiences at conferences early on. Therefore I was really glad to hear she had recently moved to Munich, and even more so when she scheduled a pair testing session with me.

Prepping

For our session, Cassandra asked to focus on identifying heuristics and oracles used for testing. For our convenience I prepared some sources of heuristics to generate ideas from. I also noted down to actively look for oracles, be it what the UI provides, the product documentation, source code we might have available, or similar applications users might be familiar with. In addition, I had a selection of potential systems under test in mind that we could choose from.

Originally, we planned to do our pair testing session on-site, as we're both based in Munich. Unfortunately, life happens when you make plans, and it turned out not to be feasible for us to meet in one place, so we decided to do the session remotely instead.

Personae for the win!

We started by reviewing the cheat sheet by Elisabeth Hendrickson, asking ourselves which heuristics we don't use every day. For my part, I don't work with user personae a lot, although I would like to do so more often. Cassandra agreed, so we decided to explore an application from a user's point of view. Now which system to test? Of the products I proposed, Cassandra chose Chewy, an online shop for pet supplies. We set up a timer to support our strong-style pairing, and off we went.

As our first persona we came up with Katie, a woman in her early twenties who just got a kitten for the first time. Starting from this basic idea, we developed the persona on the fly while exploring this e-commerce website neither of us had come across before. Katie doesn't have lots of money, as she's still a student. She does not yet know much about cats but wants the best she can get for her new pet. Katie is impatient and doesn't like reading lots of text; she rather wants to see the information she's looking for quickly.

As Katie, we searched the shop for supplies she would need for her kitten, as well as information helping her decide what's needed. Just by doing so, we frowned many times already. Why were so many dogs displayed when searching for cat food? Why was the video offered on a cat food product page also showing only dogs? Why did the product filter behave this way when we know filters to behave differently on other online shops like Amazon? Lots of things surprised us, some made us feel lost, and a few features turned out to be poorly implemented from a user perspective. Or not accessible, like using advertisement pictures full of text that a screen reader would not be able to cope with. Oh, and did I tell you Katie lives in the UK? We noticed all prices were displayed in dollars, and there was no language selector anywhere to be seen. When signing up for a new account, we noticed our UK address was indeed not accepted, and we couldn't even provide a country. Well, that was it for Katie.

So we decided to switch personas. This time we slipped into the role of an elderly bird lady. We didn't give her a name, but let's call her Berta. She has had birds all her life and knows how to care for them. Though retired, money is not her biggest problem, and neither is time. She is familiar with e-commerce websites, trying to stay up to date with what's going on in the world. She doesn't have the highest education but is definitely street-smart.

Unlike Katie, Berta knows exactly what she's looking for. She has her favorite brands and heads straight for the desired supplies with the intent to purchase. As Berta, one of the first things Cassandra noticed was that the main menu's food category for cats offered different types of food, but the one for birds offered different types of birds. What?! Would that mean birds were the food to be consumed? It might be that these kinds of categories had proven more successful regarding conversion, but it still felt strange to us. Going further as Berta, we raised lots of questions regarding features like "Autoship & save", which allows subscribing to regular deliveries - but we could only choose it for all eligible cart items at once, not select different options per product. Items marked as "deal" turned out to be interesting as well. It took us some time to find out that deals meant products offered at reduced prices valid only "today". Well, as the US covers multiple time zones, we wondered: when does "today" end? A question to be investigated in a separate session. Another really interesting discovery was the shipping policy. The text spoke about "the contiguous US" - but neither of us was sure that the word "contiguous" even existed. Kind of funny, especially as the very next sentence was "Talk about simple!". Yep. If even Cassandra as a native speaker stumbled here, it definitely was not simple, and therefore not accessible for certain educational levels. By the way, contiguous does indeed exist.

The whole session was lots of fun. We really made an effort to imagine how the persona would think and behave, always trying to stay in the role - even though as testers we noted several things beyond it. Even better, the session was also really productive when it came to feedback. We found lots of issues, doubts and question marks in a short period of time. The mere fact that many features caused negative emotions, or at least confusion, was a signal that we would definitely have lots to talk about if we were helping to test this product.

A mental note to myself: I should really slip into the user's role more often, play through scenarios, go on their journey. It's really worth it. As a reminder, here's a video I stumbled upon which makes a point about the importance of dogfooding.

How was it?

Cassandra shared that this was her first time doing real strong-style pairing, which triggered some questions for her at certain points: "Should I..?", "Can I...?" Still, she liked our collaboration and also the timer we used. In the beginning she wasn't sure whether four minutes per rotation would be enough, but then realized that we kept following our path when switching between driver and navigator roles. We really built upon each other's ideas without abandoning them. It was not about one person trying to get as much done as possible within the four minutes because that's all they get before the next one takes over. That would have been a nightmare. So, once again, collaboration was fluent.

What Cassandra missed was the option to look behind the curtain and see what's beneath the surface. She noticed URL changes around the cart which we could not explain. It would be nice to explore the reasons for this and also to learn how the content management system behind it works. With more access, we could also have used a different heuristic we considered in the beginning: following the data. For me, this is valuable feedback for preparing the next pair testing session. I plan to look for practice products that enable us to go deeper, like open source applications we can run locally.

What we both liked was that we did not get stuck with functional testing of forms, for example, as both of us are quite used to that. We stayed focused on our mission throughout the session.

Some troubles we faced in the beginning were of a quite different nature. Cassandra had just gotten new headphones, quite expensive ones even. During the first half hour they repeatedly stopped working, so we could not hear each other anymore; only a restart helped in those cases. One lesson I have learned from working with people remotely is that these calls are always prone to tech issues, no matter how experienced the people involved are.

Last but not least, one thing I already learned during my very first session: I am really bad at note taking when pairing. It seems the collaboration takes all my focus away from doing it properly. The bad news: I am still really bad at it and haven't improved so far. I guess I need to force myself and my pair to find a routine for collaborative situations as well. This time again, we generated so many ideas and so much feedback - but I hardly noted anything down, and we didn't do it together as we should have. Taking notes might even have broken our flow, but not taking them made it really hard to sum things up afterwards. What if we had simply recorded our session in addition to taking a few high-level notes? I guess that would have made it way easier to recapitulate everything.

On a Personal Note

While my testing tour started slowly with about one session per month at the beginning of the year, the frequency of stops has increased to one per week. Four further sessions are already scheduled for the coming weeks. It seems one session per week is also my personal limit: although the testing sessions are only 90 minutes, each one takes considerable time to prepare and follow up on. A lesson I learned already: as more people read about my tour and got intrigued, I had to block my calendar more and postpone new requests further into the future. What a luxury problem to have!