Friday, June 8, 2018

Testing Tour Stop #12: Pair Exploring App Development with João

Recently, I've been at a lot of stops on my testing tour. Here's the summary of another one, and there are more still to come! This time, I had the honor of pairing with João Proença. João was one of those people who got inspired by Toyer's and my pact as learning partners at Agile Testing Days 2017. A few months later, at European Testing Conference 2018, we had several long conversations and found in each other that perfect mixture of similarities and differences from which both sides can learn a lot. Toyer and I are really happy that he decided to join our extended pact group along with Dianë Xhymshiti! All in all, it's a pleasure to exchange knowledge with him, so I was really happy he decided to become part of my testing tour.

Preparation? Already done!

Now, normally it was me who prepared a system under test or even a topic to pair on, as most people didn't have a clear notion of what to tackle. This time, however, I did not have to do anything besides look forward to the session. The reason: João had arranged for us to have a look at one of his company's products, OutSystems' development environment. How great is that?! Here's part of the email he wrote in advance of our session.
What I'll be proposing we do tomorrow is a bit of paired-up exploratory testing over the "first experience" one has with the product my company, OutSystems, offers. The fact that you know little about it is great!
I'll give you a bit more detail tomorrow when we begin, but we will most likely be using the currently available features of the OutSystems platform, but also some new unreleased ones our teams have been working on!
A couple of teams here that work on product design and UI development are very interested on the outcome of our session and have asked me if we can record it, so that they can watch it later and collect feedback.
Would you be ok with us recording our session for our internal use (a video of our interaction with the platform)?
Recording? I even discussed that idea with Cassandra on our common stop, so yes I really wanted to give it a try. And all this to help people improve their product? Of course, that's exactly what I love to do!

Getting Our Session Started

OutSystems claims to enable you to "Build Enterprise-Grade Apps Fast". On their website you can find the following definition of their product.
OutSystems is a low-code platform that lets you visually develop your entire application, easily integrate with existing systems, and add your own custom code when you need it.
Crafted by engineers with an obsessive attention to detail, every aspect of our platform is designed to help you build and deliver better apps faster.
At the beginning of our session, João explained that OutSystems' development environment, Service Studio, has major release cycles of a few months, just like other IDEs such as Microsoft's Visual Studio or Apple's Xcode. The goal was to tackle the software from a newbie perspective. The product design people were really curious about the experience a new user would have, so João asked me to really speak my mind during testing. Sure thing!

We agreed to split the testing session into the following two steps.
  1. We do the "Build a Web App in 5 min" tutorial for the current version 10.
  2. We create a new app in the upcoming Service Studio version 11, which is currently under test and not yet released.
João does not work on the team developing the UI, so the whole topic was also new for him. Still, of course, he's an advanced user already and knows his way around the software. Therefore, we decided that I would be the navigator for step one, and that we would switch roles for step two, as he had not seen anything of it yet either.

Then it was about time. We started recording, and we started testing.

The Joy of Exploring a New Application

Now, I won't share the video recording, and I won't share any details about the second step, testing the beta version by trying to create a web app for a real use case. What I can share, however, are the highlights of the first step: doing the tutorial of the current version to get a first feel for the IDE. Why? Because the great thing is, you can simply try it for yourself! Personal environments are completely free and available in the cloud as long as you keep them running. For our session, João had prepared everything up to the point where we started the five-minute tutorial to develop a new web app.

One of the major topics we came across all the time during testing was usability.
  • Issues with the tutorial itself. Even recognizing the tutorial as a tutorial. The same navigation button sometimes referring to the help, sometimes to the next step. Not seeing the helper arrows pointing to the area of interest. Not displaying these arrows when viewing the tutorial steps. Not making it obvious that you don't need Microsoft Excel. Confusing ordering of the explanations of what to do. Missing steps to take. Constant re-positioning of the tutorial dialog; at one point I even misinterpreted this behavior as the next step, triggering me to execute it twice. Not reaching 100% when finishing the tutorial.
  • Inconsistent or misleading styling. Surprisingly different styling for the tutorial, the tutorial steps, and every other application dialog. A new-application button that looks inactive and does not draw attention. Triangles used as navigation arrows that rather look like buttons for running an app.
  • And more issues. Superfluous scrolling that did not reveal more options. A wizard step offering only one option, so why have it at all? Doing the same thing twice leading to two different results on purpose. Viewing your new web application first in the smartphone view. Unexpected field validation in the resulting demo application.
I talked a lot about the feelings I had regarding the application. As a new user, it's very important for me that I don't get lost but find my way through the first steps easily enough, that I get aha moments of what's happening so I learn, and that I don't get annoyed or frustrated by the software. At several points, however, I did get those feelings. I first had to learn how the tutorial wanted to lead me through; only then could I complete it successfully.

I tried to really slip into the shoes of a new user. From a technical point of view I understand how things are built and why certain decisions were made, but from a pure user point of view I would have asked many questions, so I raised them. Especially as I understood that the tutorial was trying to take the user by the hand and make it as easy as possible for them to create a first app. Furthermore, new users will always compare the system at hand to applications they already know, using those as oracles for their expectations.

In our second step, we switched roles and learned about the new version with several things revised. Again we found quite a few interesting things which I cannot disclose here, but João took them with him to forward to the team.

All in all, this is of course only part of the story, told from a tester's perspective: trying to find those things that can still be improved. The product looked really nice and easy to use, and I would love to dive deeper into it!

In Retrospect

It was a really nice experience to have everything so well prepared by João - my thanks to him for that! It was awesome to explore yet another completely unknown application. It was fun to be an actual new user as a tester while imagining, at the same time, the perspective of a real new user who wants to start developing an app, trying to learn what they would learn. It was great that we collaborated well and our results were really productive. It's absolutely awesome that we have a video recording of our session! This way we could focus on learning and providing feedback. The downside, of course, was that I then had to go through nearly two hours of video material to extract the goodies for this blog. In future sessions I would not skip note-taking as we did, so I would still get that quick overview of issues, questions, and further areas to explore. Still, video recording can be invaluable there, too. It's proof of what actually happened! Awesome for reproducing and reporting bugs, or even for investigating them.
Sometimes there are bits of information that are crucial but you only acknowledge that afterwards when you had to go back. ~ João
When watching the recording, I saw lots of things I had not realized during testing, even though João drove for me and I could focus on observing and navigating. Exploring a video recording provides lots of feedback as well! About the application, as well as about our own procedure when testing, our thoughts and ideas. Lots to learn from.

We set off exploring with certain goals in mind, and we reached our destinations. However, we did not structure too much how to get there. This is the sort of freedom I love when exploring, with every step we learn more about potential ways ahead of us and can decide which to take. We don't have to follow the one and only route strictly but can discover so much more on our way, even if it's not in our focus.

On a meta level, I noticed that with more practice it is indeed getting easier for me to pair with other people. It still helps if I have already met them in real life. I feel the fear of looking silly is lessening, so I can focus more on the actual task at hand and freely express my thoughts. My sessions were constructive from the beginning, but earlier ones came with a layer of self-doubt to deal with on top.

There's another thing that only came to my mind when writing this blog post: it felt awesome exploring an IDE. Again. I only realized now that I have way more experience in that area than I might have admitted even to myself. As a user of IDEs like Eclipse or IntelliJ, of course. But even more importantly, as a tester. I spent my first years as a tester testing an AI middleware for computer game developers. We developed our own engine, IDE and server, and we also integrated with Unity. I did lots of exploring back then already; I even wrote those kinds of beginner tutorials myself! :D And I loved this awesome domain.

Finally, the very best thing: João said he took a lot of value out of the session himself. He can now bring the issues we found to the teams. It was great to hear that they do lots of usability tests and really care about the feedback, using it to improve the product!

Saturday, June 2, 2018

Testing Tour Stop #11: Pair Exploring Test Ideas with Viki

The first time I met Viktorija Manevska was back at Agile Testing Days 2015, the very first conference I had ever attended. Before the first conference day started, I joined the lean coffee session hosted by Lisa Crispin and Janet Gregory. Both Viki and I found ourselves at Lisa's table, sitting next to each other. This is where we started to exchange experiences. Since then, we have met each year at the same lean coffee table on the first conference day! :D Viki helped me a lot by sharing her story as a first-time conference speaker and providing feedback on my own first steps towards public speaking. We had several Skype calls over the last year which proved invaluable for knowledge exchange, just like all my sessions with Toyer Mamoojee. I really love how I have met up with more and more awesome testers over the last months! I signed up for virtual coffees to get to know other testers like Milanka Matic, Amber Race and Kasia Morawska. I really enjoyed the calls I had with Mirjana Kolarov. And I loved finally meeting Anna Chicu for the first time! All those meetings were invaluable, full of knowledge sharing, storytelling and having a good time together.
Now, back to Viki. She just recently moved to Germany to work for a consultancy. Having worked in product companies before, her current project provided new opportunities as well as new challenges for her. We decided to focus our pair testing session on one of those, on a part of testing that might not be the first to come to mind: generating and challenging test ideas as input for exploratory testing sessions.

A Session of a Different Kind

This time, there was no software under test - instead, we tested and challenged ideas. Test ideas that Viki had already brainstormed and provided as input for our testing. The result? We discovered more ideas. We restructured current ideas. We removed duplicates. We challenged the procedure and more. All with the goal of improving and making things better, like we testers usually do with everything that gets into our hands.

While brainstorming and discussing, we talked a lot about testing itself. Have you used tours as a method to explore a product? I normally lean heavily on cheat sheets to generate more test ideas, identify risks, and come up with questions about things we haven't talked about yet. Cheat sheets like the famous test heuristics cheat sheet by Elisabeth Hendrickson, among other great resources. This way, we started talking about exploratory testing itself. How structured should it be? Sometimes I feel my own sessions would benefit from a bit more structure, but I'm still happy when I start out looking for risks in area A and end up discovering a lot in area S. However, exploring is not unstructured, and it seldom goes without a clear mission or note-taking. What I really love is Elisabeth Hendrickson's definition of exploratory testing, as stated in her most awesome book Explore It!.
Simultaneously designing and executing tests to learn about the system, using your insights from the last experiment to inform the next
In my opinion, too much structure, or being too strict about it, removes the part of adapting your way according to what you've learned. Having the product show you the way, or better, multiple ways, following them to explore areas of the unknown. At times I wonder whether Maaret Pyhäjärvi was thinking of that when saying that "the software as my external imagination speaks to me"; a notion I have often pondered.

Now, the system to brainstorm test ideas for was a typical untypical web application. Something known like signing up for a user account, something unknown due to the specific domain. We started with ideas around negative testing, UI testing, smoke testing, and workflow testing. Going forward, we produced a lot more ideas, asking so many questions about the product, and also about testing itself.
  • Would you consider different browser settings like turning JavaScript off as negative testing? It could be a normal use case for certain users.
  • What about high security settings or incognito modes? Some users just care more about privacy than others, still they don't mean any harm to your application.
  • What about popup blockers or ad blockers? They are quite frequently used.
  • How to best limit the input for a name field? This made me think of Gojko Adzic's recent book Humans vs Computers as well as his famous Chrome extension Bug Magnet, a really useful tool to quickly test out valid or invalid input values.
  • Or the Big List of Naughty Strings! I've known it for a long time but haven't used that one much yet. There's definitely more to it.
  • This brought up the topic of SQL injection, a common vulnerability to check for. In general, user data must not be at risk, so let's see if we can get login information, take over user sessions, or simply view other users' unsecured pages.
  • What about tampering with system settings like your local time - could you trick the application into providing you better offers based on dates?
  • You send emails? Really important to test those properly then. In one of my previous teams, formatted emails were displayed very differently using different email clients. Are the parameters of the email template filled with the correct values depending on given conditions? Are potential images attached that need to be downloaded? Are the links correct? Could the email be considered as junk and filtered out? How long does it take to receive the email?
  • What happens if you bookmark the URL while being in the middle of a process to share it with someone else?
  • What about the browser back and also forward buttons, could they lead to unexpected, inconsistent, or simply unpleasant behavior?
  • What about consistency regarding styling? Is the UI accessible, e.g. for color-blind people?
  • Which kind of browsers, browser versions, operating systems/devices, screen resolutions do we really want to support?
  • Would you have manual smoke tests? Why not automate those sanity checks instead of repeating them over and over again?
  • What about alternative paths through the application, mixing up the order in which you can do things or skipping steps - would you still reach your goal as a user? Should you be able to do so or is there a prerequisite for critical things?
  • Have you ever tried the nightmare headline game to discover what would be the worst to go wrong?
  • Oh and: although we received so many emails about it lately - have you really considered GDPR compliance?
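Several of these ideas, like Bug Magnet or the Big List of Naughty Strings, lend themselves to a quick automated probe. Here's a minimal Python sketch of that approach; `validate_name` is a made-up stand-in for whatever validation the application under test actually performs, not real application code:

```python
# Probing a name-field validator with tricky inputs, in the spirit of
# Bug Magnet and the Big List of Naughty Strings.
# NOTE: validate_name is a hypothetical stand-in for the real validation.

def validate_name(name: str) -> bool:
    """Toy validator: non-empty, at most 50 characters, no control characters."""
    if not name or len(name) > 50:
        return False
    return all(ord(ch) >= 32 for ch in name)

TRICKY_NAMES = [
    "O'Brien",                        # apostrophe, classic SQL tripwire
    "Renée",                          # accented character
    "李小龍",                          # non-Latin script
    "   ",                            # whitespace only - should this pass?
    "a" * 51,                         # one character past the assumed limit
    "Robert'); DROP TABLE users;--",  # little Bobby Tables
]

for name in TRICKY_NAMES:
    verdict = "accepted" if validate_name(name) else "rejected"
    print(f"{name!r}: {verdict}")
```

Running such a sketch makes gaps visible at a glance - here, for example, a whitespace-only name would sail right through the toy validator.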

Looking Back

Both of us loved the session! Viki said she was really satisfied with the outcome. It was great for her to hear a different opinion, especially from another tester's fresh point of view coming from the outside. Sharing experiences, experiments and stories provided lots of value for both of us. I was really glad my input helped her further, especially regarding testing emails, SQL injection and GDPR. I personally really enjoyed having a completely different pair testing session: exploring mind maps, challenging ideas and thoughts, discussing structures, procedures, and strategies. Exchanging information about different positions, roles, expectations, and ways of collaboration - which is just as important when testing. Talking about exploratory testing: what we understand by it, what we do, how we give feedback and create transparency, how we provide value. Just as Viki said, quoted freely: "Name it like you want, but it's important to deliver value." I so much agree.

Thursday, May 24, 2018

Testing Tour Stop #10: Pair Exploring Kitchen Planning with Alex

This. Was. Fun. And lots of it! Alex Schladebeck and I went out exploring and came back with lots of potential issues and questions.
When Alex scheduled a pair testing session with me, I leaped for joy. Alex had already impressed me a lot with the talks she gave and how she conveyed her messages. The last time she astonished me was at European Testing Conference last February, where she did live testing on stage, talking about the testing she was doing. How awesome and courageous was that? I was fascinated. As I had met her several times already, I grasped the opportunity and challenged Alex to take it a step further and do it without knowing the system under test beforehand! Well, Alex indeed accepted the challenge and is going to give this session at Agile Testing Days at the end of this year. I'll be there, front row, cheering her on.


First Things First

Alex had asked to explore an application together, and she was happy with any proposal I'd bring up. When starting our session, I presented her with the kinds of applications I had already identified as potential target systems. However, Alex had inspired me by accepting the mystery application testing challenge, so I also offered her a different option: we could think of any word that comes to mind, google it, and then go for the first app we come across. Alex really liked the idea, but then decided to keep that for a future session and instead tackle the IKEA kitchen planner together! Why? Because we normally don't get to test something where 3D is involved. And as it happens, Alex had just recently planned a kitchen herself. I'd say this qualified her as our subject matter expert!

Strong-style pairing? Sure, let's go for it! Mind mapping tools or anything else for note-taking? Nope, Alex confessed she's the old-school pen-and-paper type of person. So we agreed that we would each jot down our own notes on paper.

The Fun of Testing & Talking About It

We started off from the following link, pointing us to the Great Britain version of the IKEA kitchen planner: http://www.ikea.com/gb/en/customer-service/planning-tools/kitchen-planner/ It seems that when I researched potential test applications for my pairing sessions, I had googled for something like "ikea kitchen planner" and just picked one of the results. Only later, when writing this blog, did I realize that the corresponding pages for other countries not only offer quite a different layout, but also do not offer the software we tackled. In our case, two planning applications were offered: the METOD Planner and the 3D Kitchen Planner. We went for the METOD one, as it was advertised as the new one.

The first thing we noticed was the product claim on the homepage: "Choose your style and get your quote in 1 MINUTE". This felt like an open invitation to test! Well, we stumbled right at the beginning. It took us quite a while to realize why the claim had been made. The tool basically asked us to provide our kitchen floor plan as input and then automatically filled up the available space with elements of the chosen kitchen style. If the 1-minute claim referred to this calculation done by the tool - well, then it's a valid claim. However, this is not what we expected. The claim raised the expectation that it would only take 1 minute from choosing the style until we got the quote. But shaping our kitchen alone took us way longer than that, let alone choosing the kitchen design. I guess you could compare this to the dubious "5-minute meals" cookbooks! There are always way more things to do, and they probably take you way longer than it takes a head chef to use already-prepared ingredients at their disposal.

After selecting one of the proposed kitchen styles, we were referred to a floor plan editor, asking us to adapt the displayed room shape to our actual kitchen.
  • The floor plan editor area was sized so large that it exceeded our screen space. This way we did not notice at first that the kitchen plan already had a door on the bottom wall. Therefore we wanted to add a door. A sidebar on the right offered us structural elements: a door, or a window. We thought we could drag the desired door to the floor plan, but found we had to click on it so it got added to the selected wall. Interestingly, the top wall was selected by default, so the door got added to the same place where elements for sink and cooking area were placed by default. As they overlapped, they were highlighted red to indicate that they were invalidly placed.
  • Although we were not able to drag and drop an element onto the floor plan, we could instead drag and drop it within the editor area to change its position. As expected, we could only move it along the walls, not into the middle of the room. We found we could resize the elements as well as the room walls. When reducing the size of the walls, the door did not move with them so it got displayed outside the kitchen. Well that's one option to handle that. At least, it got validated as incorrectly placed.
  • The room could only be adapted to rectangular forms; we had no wall elements or any option to shape corners or define other angles. Well, I guess that's a feature that did not make it into the MVP, which covers most use cases without it.
  • When selecting the door we added, it offered us some action buttons. One of them looked a lot like an undo button, but indeed it changed the direction in which the door opens. An undo button was dearly missed; it's an editor after all and as users we would like to be able to return to a previous state. Just these action buttons would deserve a separate session already.
  • We noticed several localization issues regarding translations. For example, the structural elements offered the tooltips "DOOR" and "WINDOW", which looked a lot like labels that someone had forgotten to translate; especially as other tooltips showed "Sink area" and "Cooking area". We also came across several spelling errors, letting us note localization as a follow-up topic to be explored.
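The missing undo button got us talking about how editors usually make returning to a previous state possible. A minimal sketch of the common snapshot-stack approach; all names here are illustrative, nothing is taken from the actual planner:

```python
# Sketch of an undo stack as commonly found in editors.
# FloorPlanEditor and its methods are hypothetical, not from the IKEA planner.

class FloorPlanEditor:
    def __init__(self) -> None:
        self._history: list[list[str]] = []  # snapshots of previous states
        self.elements: list[str] = []        # currently placed doors/windows

    def add_element(self, element: str) -> None:
        self._history.append(list(self.elements))  # snapshot before changing
        self.elements.append(element)

    def undo(self) -> None:
        if self._history:  # restore the most recent snapshot, if any
            self.elements = self._history.pop()

editor = FloorPlanEditor()
editor.add_element("door")
editor.add_element("window")
editor.undo()
print(editor.elements)  # -> ['door']
```

Every state-changing action pushes a snapshot, and undo simply pops it - which is exactly the kind of safety net we missed while shaping our floor plan.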
We noticed several areas we could dive deeper into. For now we decided to move forward and pressed a button to "design my kitchen". Our editor area changed and we now saw a 3D visualization of our kitchen. Quite some things to discover here!
  • Again, the editor area was larger than our screen size, so we tried to scroll in that area - and noticed that using the mouse wheel here zoomed in and out of the 3D view. As a user, I really dislike these kinds of implementations which limit the available scrolling area, forcing me to scroll in the sidebar on the right.
  • This sidebar now offered different kitchen layouts to select the outline of the kitchen furniture. We decided to go for the largest option and wanted to see if the windows we placed were handled correctly. We found an icon in the editor toolbar showing a moving camera - which turned out to be a 360° viewer. However, once clicked, the view continued to move without stopping! Quite unexpected.
  • We realized that the door for which we had changed the direction earlier was displaying the handle on the wrong side in the 3D visualization.
  • When clicking an element in the editor, it got selected and the sidebar offered us related alternatives. After choosing another option, however, all related elements got changed, not only the one we had selected. Weird! We'd rather have all of them selected, so we get a preview of what will be changed and what won't.
  • We thought about getting back to the floor plan. Maybe the "2D" toolbar icon would lead us there? But no, it was a 2D view of the kitchen walls. It also offered an option to view the kitchen from top - but wait, why is the countertop suddenly displayed in black when we had changed it to a white one before?
  • When exploring the different modes, we noticed how the toolbar options changed. Interestingly, the toolbar did not always build up from left to right. When selecting the top view in 2D mode, an option vanished from the middle of the toolbar. Why had it not been placed on the right-hand side if it does not apply to all modes? This expectation probably has a lot to do with the direction we're used to reading in; we assumed these kinds of expectations are heavily cultural.
We had noticed a few buttons offered for navigation. However, given what we had seen before, we were already scared to use those buttons. Not a good sign - people have to trust that IKEA really delivers the kitchen as designed! It's not the cheapest product, either.
  • Starting from the 3D view, we chose to risk the browser back button. And it took us back to the homepage without a warning that our changes would get lost. Dislike!
  • We decided to give it a try and now used the browser forward button, which took us back to the floor plan, not the 3D view. And most interestingly, no action buttons were offered on elements anymore, so we could not delete any elements we had placed! This was when Alex shared that you can actually use the following as a heuristic for testing: "if you run out of paper space for your notes, it's a bad sign" (freely quoted). Even weirder behavior showed up when trying this again later. Now the elements were not even displayed anymore. Also, when I then misplaced elements, went forward to design the kitchen, and confirmed the warning about the incorrect positioning, I suddenly saw the 3D view without the toolbar in the editor area!
  • We started again from the 3D view and now gave the back button offered by the application a try. This time we were instantly taken to the floor plan as expected, with all action buttons working.
  • Refreshing the page? Oh, getting back to the homepage with everything gone.
  • Home button? It also took us back to the homepage without a warning that our changes would get lost.
  • New button? Again no warning! Really? Also: this time we went to the floor plan instead of the homepage; but how to select the kitchen style then?
By navigating back and forth, we found further issues with the different editor views on second sight.
  • In the floor plan, we placed two doors on either side of a corner. Although we flipped one door to open to the outside of the room, the elements were highlighted red as incorrectly placed. We started to drag them farther away from each other, but validation only passed at an unexpectedly wide distance. In our eyes, they should not interfere at all even when placed more closely. Well, with both doors pointing to the inside they might, so we switched the door back to open inside - and suddenly the validation passed! Flipping the door back to open to the outside, it failed again. Fascinating.
  • In the 3D view, we discovered that the arrows offered in the toolbar moved the room exactly in the opposite way as we expected. This felt really strange!
  • Moving the camera, we noticed it was hard to get back to a centered view. We found the camera icon reset the view (side note: the icon rather looked like a screenshot icon). However, depending on the selected kitchen layout, it reset the view to a different perspective so it happened that we were looking at a different wall than when first navigating to the 3D view.
  • We wanted to delete a kitchen element in the 3D view, but selecting an element only provided us with an edit icon. Clicking it, we found the remove option hidden inside the edit menu! Why? After removing an element, you could fill up the space again with a new element. However, this new element did not offer the removal option in its edit menu anymore!
  • When choosing a different option for the kitchen parts standing on the floor, the related cupboard elements were listed in the sidebar recommendations for easy selection. But if we first changed the cupboards, the related floor furniture was not listed in the recommendations. Alex wondered whether staff are trained to always do it the first way and never the other way around, so they would never notice this.
  • We found that there were actually two kinds of signals to indicate an ongoing process: a progress bar and a loading circle. Why two different ones? This does not feel consistent.
Throughout our testing session, we found that we often wanted to try the same things, having the same thoughts in mind. We also talked about why we wanted to try those things. Alex made a great point here. She said it became obvious that we applied our knowledge of how software is built when testing. Like the fact that sometimes, technically, things only get updated once you click outside an element. That elements could share an implementation although they should differ (like being able to delete the last window but not the last door). That it would be interesting to compare the application with the other 3D Kitchen Planner and see whether a potential re-use of implementations might have introduced issues.

We also wondered why we did not find an option to export what we planned in the METOD Planner so we could import it into the 3D Kitchen Planner for more detailed planning. We wondered whether IKEA staff used the METOD Planner themselves when consulting customers. When transferring my notes into digital form, I got curious. I finally went further and, after designing the kitchen, chose to "save & see my quote". I had to accept a legal notice first, then select a store (Great Britain only), and then finally received the quote and a related project code. However, I couldn't copy it - interaction was disabled! I wonder why. Well, instead they offered me to download my project code; but it downloaded as an image! So I tried to print the quote, which triggered the generation of a PDF file. From there I could finally copy the code to store for future reference and recover my kitchen plan.

All in all, we found lots of potential issues. But the question is always: Are they relevant? Are they known but their fix would just not provide enough value? Maybe. Still, as users (okay agreed, testers) from the outside we stumbled.

Interestingly, when Maria Kedemo learned that Alex and I tackled an IKEA application, she offered to forward our findings to whom it may concern.

@Maria: Done :) Thank you!

Reflection Time

First of all: We agreed that the session was a lot of fun. We really enjoyed doing hands-on testing together! Alex shared she was once again shocked and excited at the same time when testing a production application and seeing how many potential problems there are! This is like a litmus test for her: if there are so many problems on the surface already, then there are more problems deeper down.

Alex had the impression that our strong-style pairing was not really strong-style, but more of a discussion: talking about testing while testing. In my opinion we adhered to the driver role as the one on the keyboard executing the navigator's intention; however, we let our driver co-navigate in addition. Still, it felt right and was a fluent back and forth with both of us contributing in many ways, so it was absolutely fine for me. The interesting part of talking about testing was the moments when we realized what we were doing and why we were asking the questions we asked. We often wanted to try the same thing, we used oracles to decide what to expect, we used our insights into how systems are built, making all this explicit.

Alex also shared she was nervous before the session as she doesn't get to do hands-on testing that much anymore. This is really interesting. Although I do lots of hands-on testing in my job, I am nervous before each and every pair testing session, even if I know my pair, as in Alex's case! Fear and uncertainty about my skills were major reasons why I decided to do the testing tour in the first place.

During our session, I felt we sometimes lost focus; we saw so many things in so many places. As Alex put it nicely: the squirrel factor. She agreed that we had many threads going, but we either followed them or left them for later exploration. Well, especially for new applications this is often how you do it: you first go broad and then do a deep dive into single areas. Still, I felt I have to focus more on smaller parts, which was also a lesson from previous sessions. We both agreed that it would be great to come back and do another session, diving deep this time.

Also, once again, I have to get better at note taking. After our session, my notepad was a mess; and once again I would have failed Maaret's test of being able to say quickly how many issues I found, how many questions, how many future charters I discovered and why, and so on. Why does that still happen when pairing, although I know better by now? Last time I even thought about recording the session. It would not have helped me present a quick overview, but it would indeed have helped me recapitulate the session, as my notes were quite sparse compared with what we found.

One more point Alex brought up was that we're testers in every situation, seeing issues in processes at airports and everywhere else. I so much relate to that. I like to say that being nitpicky might not be the best quality around family and friends, but it's a great card to play while testing.

The Testing Tour Experiment

This was my tenth stop, so in fact my original experiment is complete!
  • I did 10 pair testing sessions before end of October 2018, each lasting at least 90 minutes.
  • I paired with 9 different testers from both my company's internal community and the external one.
  • The topics focused on exploration and automation, as well as covering special topics like security or accessibility.
  • I published one blog post per testing session and also made this personal challenge transparent in my company.
Now, did it confirm my hypothesis that pairing and mobbing with fellow testers from the community on hands-on exploratory testing and automation would result in continuously increasing skills and knowledge as well as serendipitous learning? I would say it did. However, I will have to take a closer and more critical look when preparing to share my lessons learned at CAST and SwanseaCon.

Still, theoretically I could stop now. But I decided I'll continue to accept sessions until the end of October. Why? Because I'm still learning and I'm still contributing, so it's the right place to be and the right thing to do. Going on a testing tour worked very well for me and I recommend giving it a try.

Friday, May 18, 2018

Testing Tour Stop #9: Pair Experiencing the User Perspective with Cassandra

If you haven't happened to come across Cassandra H. Leung yet, I highly recommend checking her out, especially her insightful blog. I've been following her for some time now and she has inspired me a lot, especially as someone who took the testing community by storm and shared her experiences at conferences early on. Therefore I was really glad to hear she recently moved to Munich, and even more so when she scheduled a pair testing session with me.

Prepping

For our session, Cassandra asked to focus on identifying heuristics and oracles used for testing. For our convenience I prepared some sources for heuristics to generate ideas from.
I also noted down to actively look for oracles, be it what the UI provides, the product documentation, source code we might have available, or any similar applications users might be familiar with. Also, I had a selection of potential systems under test in mind that we could choose from.

Originally, we planned to do our pair testing session on-site as we're both based in Munich. Unfortunately, life happens when you make plans, and it turned out to not be feasible for us to meet at one place so we decided to do the session remotely instead.

Personae for the win!

We started by reviewing the cheat sheet by Elisabeth Hendrickson, asking ourselves which heuristics we don't use every day. For my part, I don't work with user personae a lot, although I would like to do so more. Cassandra agreed, so we decided to go for exploring an application from a user's point of view. Now, which system to test? Of the products I proposed, Cassandra chose Chewy, an online shop for pet supplies. We set up a timer to support our strong-style pairing, and off we went.

As our first persona we came up with Katie, a woman in her early twenties who just got her first kitten. Starting off with this basic idea, we developed the persona on the fly while exploring this e-commerce website we had both never come across before. Katie doesn't have lots of money as she's still a student. She doesn't know much about cats yet but wants the best she can get for her new pet. Katie is impatient and doesn't like to read lots of text; she rather wants to quickly see the information she's looking for.

As Katie, we searched the shop for supplies she would need for her kitten, as well as for information helping her decide what's needed. Just by doing so we frowned many times already. Why were so many dogs displayed when searching for cat food? Why was the video offered on a cat food product page also showing only dogs? Why was the product filter behaving this way when we know filters to behave differently in other online shops like Amazon? Lots of things surprised us, some made us feel lost, and a few features turned out to be poorly implemented from a user perspective. Or not accessible, like using advertisement pictures with lots of text that a screen reader would not be able to cope with. Oh, and have I told you Katie lives in the UK? We noticed all prices were displayed in dollars, and there was no language selector anywhere to be seen. When signing up for a new account we noticed our UK address was indeed not accepted and we couldn't even provide a country. Well, that was it for Katie.

So we decided to switch personas. This time we slipped into the role of an old bird lady. We didn't give her a name, but let's call her Berta. She has had birds all her life and knows how to care for them. Though retired, money is not the biggest problem for her, and neither is time. She is familiar with e-commerce websites, trying to stay up to date with what's going on in the world. She doesn't have the highest education but is definitely street-smart.

Different from Katie, Berta knows exactly what she's looking for. She has her favorite brands and goes straight for the desired supplies with the intent to purchase. As Berta, one of the first things Cassandra noticed was that the main menu's food category for cats offered different types of food, but the one for birds offered different types of birds. What?! Would that mean birds were the food to be consumed? It might be that these kinds of categories had proven more successful regarding conversion, but it still felt strange to us. Going further as Berta, we raised lots of questions regarding features like "Autoship & save", which allowed us to subscribe to regular deliveries - but we could only choose it for all eligible cart items at once, not select different options per product. Items marked as "deal" turned out to be interesting as well. First it took us some time to find out that deals meant products offered at reduced prices valid only "today". Well, as the US covers multiple time zones, we wondered: when does "today" end? A question to be investigated in a separate session. Another really interesting discovery was the shipping policy. The text spoke about "the contiguous US" - but neither of us was sure that the word "contiguous" even existed. Kind of funny, especially as the very next sentence was "Talk about simple!". Yep. If even Cassandra as a native speaker stumbled here, it definitely was not simple, and therefore not accessible for certain educational levels. By the way, contiguous does indeed exist.
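Our "when does 'today' end" question can be made concrete with a small sketch. How the shop actually handles deal expiry is unknown to us; the following just illustrates, with fixed example offsets, that the very same instant falls on different calendar dates on the two US coasts:

```python
from datetime import datetime, timezone, timedelta

# Fixed summer-time offsets for illustration only; real code would use proper tz data.
EASTERN = timezone(timedelta(hours=-4), "EDT")  # US east coast
PACIFIC = timezone(timedelta(hours=-7), "PDT")  # US west coast

# One and the same instant, shortly after midnight on the east coast:
instant = datetime(2018, 5, 18, 5, 30, tzinfo=timezone.utc)

east_date = instant.astimezone(EASTERN).date()  # 2018-05-18 already
west_date = instant.astimezone(PACIFIC).date()  # still 2018-05-17

print(east_date, west_date)
```

So a deal ending "today" either needs a single reference time zone or per-user handling; either way, it's worth a test charter of its own.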

The whole session was lots of fun. We really made an effort to imagine how the persona would think and behave, always trying to stay in the role - even though as testers we noted several things along the way. Even better, the session was also really productive when it came to feedback. We found lots of issues, doubts, and question marks in a short period of time. Just the mere fact that many features caused negative emotions or at least confusion was a signal that we would definitely have to talk about lots of things if we were helping to test this product.

A mental note to myself: I should really slip into the user's role more often, play through scenarios, go on their journey. It's really worth it. As a reminder, here's a video I stumbled upon which makes a case for the importance of dogfooding.

How was it?

Cassandra shared that this was her first time doing real strong-style pairing, which triggered some questions for her at some points: "should I..?", "can I...?" Still, she liked our collaboration and also the timer we used. In the beginning, she wasn't sure whether four minutes per rotation would be enough, but then figured that we still followed our path when switching between driver and navigator roles. We really built upon each other's ideas without abandoning them. It was not about one person trying to get as much done as possible within the four minutes because that's all you get before the next one takes over. That would have been a nightmare. So, once again, collaboration was fluent.

What Cassandra missed was the option to look behind the curtain and see what's beneath the surface. She noticed URL changes when it came to the cart, which we could not explain. It would be nice to explore the reasons for this, and also to learn how the content management system behind it worked. With more access we could also have used a different heuristic we considered in the beginning: following the data. For me, this is valuable feedback for preparing the next pair testing session. I plan to look for practice products that enable us to go deeper, like open source applications we can run locally.

What we both liked was that we did not get stuck with functional testing of forms for example, as both of us are quite used to that. We stayed focused on our mission throughout the session.

The troubles we faced in the beginning were of a quite different nature. Cassandra had just gotten new headphones, even quite expensive ones. During the first half-hour they simply stopped working several times, causing us to not hear each other anymore. Only a restart helped in these cases. One lesson I learned working with people remotely is that these calls are always prone to tech issues, no matter how experienced the people involved are.

Last but not least, one thing I learned during my very first session already: I am really bad at note taking when pairing. It seems the collaboration part takes all my focus away from doing it properly. The bad news: I am still really bad at it and haven't improved so far. I guess I need to force myself and my pair to find a sort of routine for collaborative situations as well. This time again, we generated so many ideas and so much feedback - but I hardly noted down anything, and neither did we do it together as we should have. Doing so might even have broken our flow, but not doing so made it really hard to sum things up afterwards. What if we had simply recorded our session in addition to taking a few high-level notes? I guess that would have made it way easier to recapitulate everything.

On a Personal Note

While my testing tour started slowly with about one session per month in the beginning of the year, the frequency of stops has increased to one per week. Four further sessions are already scheduled for the coming weeks. It seems one session per week is also my personal limit. Although the testing sessions are only 90 minutes, each one takes considerable time to prepare and follow up on. A lesson I've learned already: as soon as more people read about my tour and got intrigued, I had to block my calendar more and postpone new requests further into the future. What a luxury situation!

Friday, May 11, 2018

Testing Tour Stop #8: Pair Accessibility Testing with Viv

On today's stop of my testing tour, I had the pleasure of pair testing with Viv Richards. I got to know him via SwanseaCon. He was the first one to accept me as a newbie speaker last year, and gave me the opportunity to speak again at this fabulous conference this year! I'm really glad it worked out to have him as part of my tour. It was a fun session full of insights.


Accessibility - The Neglected Child

Viv left it to me to choose a topic for our pairing session. He said he would be happy to explore any area I prefer or am comfortable with, as he sees himself as a "jack of all trades, master of none". I so relate to that! Well, I decided to go for accessibility testing this time. Why? In my opinion this is a very important topic, often overlooked or postponed. I have never had the opportunity to actively work on a product where this was a requirement, or even considered in any way. I've read some things about it, but really lack practical experience. On top of that, I knew Viv had experience in this area. Back when he was still in a developer role, he worked on a product where accessibility was a big topic.

To prepare for our session, I researched some pages which would help us kick it off. As shared in the post about my last stop, I don't like to limit the scope of our sessions too much. I prefer to keep enough freedom for us to explore in any direction it might lead us; the main goal is learning. Still, I'd like to have some options prepared upfront. Here's what I found.

Hands-on Testing

For our session, I decided not to go for one of the demo pages, but rather to try a production application and see what accessibility looks like in the real world. I chose the web version of the todo application Remember The Milk. I'm not using it myself, but I tried it out years ago when searching for a task management solution fitting my needs.

We started the session by imagining we had no mouse available and could only use the keyboard to navigate the application. We could successfully sign up for a new account this way, but then quickly faced problems. It was not obvious at all how fields were ordered, and we often missed visual feedback on where the current focus was. Viv shared that a screen reader tool would have problems with that. But even just by not using a mouse, we failed to navigate to certain fields, like setting additional options when creating a new task. As we stumbled heavily from the very beginning, we decided to switch to simulating a different kind of user experience.

What if we were just shortsighted and didn't have an optical aid at hand? We set the browser zoom to 200%. The page didn't look as nice anymore, but it was still fully functional. We could reach all page areas and elements. The same held when reducing the zoom below 100%.

But what if we only had one hand available (maybe carrying a child in our arms), and that might not be our usual one? I'm right-handed, so I tried to use the mouse with my left hand while using the application. Though this was slower, it worked out well. Interestingly, during this time we came across functional application behavior that we would not have expected.
  • We just wanted to add a reminder to one of our todos, but doing so the application took us to the settings page: we first had to define a device to be reminded on. Hm. Okay, we chose the computer. And the app instantly opted us in for all kinds of notifications. I don't like it when apps sign me up for everything by default; it just leaves me with a bad feeling.
  • The settings dialog showed a save button - but an inactive one. Why? We found it was only meant to save changes made to the kinds of notifications we'd like to receive. Not obvious, not nice.
  • Going further, we failed to define a reminder for a specific time; only days or weeks before our due date were available. For me this would be an important feature for a task management tool. But okay.
  • Then we discovered that subtasks can be sorted by drag and drop. There was a configuration menu, but it offered only one option, drag and drop sorting, and it could not be unchecked. Really strange! Only later did I find that the related help text explained that subtasks can only be sorted the same way as the original task list they belong to.
Well, the drag and drop functionality triggered the next idea. What if we could not use JavaScript? Viv shared that in his experience this was a valid case: his team had to first develop without using JavaScript at all, which meant they needed a much simpler UI. To simulate this in our case, we opened the Chrome Dev Tools and disabled JavaScript in the settings. We learned that you need to keep the Dev Tools open for this to take effect. After refreshing the application page, we found it could not be loaded at all. However, it also did not provide any feedback as to why. At least a notification that JavaScript needs to be enabled would be required to not leave people lost.

We decided to start using tools in general to get an overview of existing accessibility problems. Viv recommended the Chrome extension WAVE Evaluation Tool. It presented a really nice overview of the current page's accessibility issues, as well as explanations of why these points are considered problematic. This way we found issues like missing labels for input fields, missing alternative descriptions for images, and structural issues like having h2 headers but no h1 header to start from. We found that ARIA roles, states, properties, and labels were provided as expected. To my surprise, the tool also pointed out an incorrectly defined unordered list! When researching later on, I learned that incorrectly defined unordered lists are easily comprehensible as a formatting element for sighted users, but present a problem if you have to rely on a screen reader, which interprets them as single paragraphs without providing an outline. WAVE also offered the option to see the page without any styles, as well as to test for a sufficient contrast ratio between foreground and background colors. In the case of Remember The Milk, some elements did not provide sufficient contrast to fulfill the AA or AAA levels defined in WCAG 2.0.
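Some of the checks a tool like WAVE performs can be approximated in a few lines of code. Here's a minimal sketch (not how WAVE itself works) using Python's standard html.parser to flag two kinds of issues we came across: images without alternative text, and an h2 appearing without any preceding h1:

```python
from html.parser import HTMLParser

class A11yLint(HTMLParser):
    """Tiny accessibility lint: flags img tags without alt text
    and an h2 that appears before any h1 has been seen."""
    def __init__(self):
        super().__init__()
        self.issues = []
        self.seen_h1 = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append("img without alt text")
        if tag == "h1":
            self.seen_h1 = True
        if tag == "h2" and not self.seen_h1:
            self.issues.append("h2 without preceding h1")

lint = A11yLint()
lint.feed('<h2>Tasks</h2><img src="logo.png">')
print(lint.issues)  # ['h2 without preceding h1', 'img without alt text']
```

A real checker covers far more rules, but even a toy like this shows how mechanically detectable many accessibility issues are.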

Evaluating the contrast led us to consider color-blindness. There are Chrome extensions to simulate this as well. We tried Spectrum and I want to see like the colour blind, both offering different display modes for the page. We realized we didn't know the technical terms to describe the different experiences when it came to colors!
What helped me get a quick overview of these different types was the color blindness table of the color-blind npm package. All in all, Remember The Milk did quite well; only with low contrast did we deem it hard to use.
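The contrast check WAVE runs is based on a formula defined in WCAG 2.0: each color is converted to a relative luminance, and the contrast ratio (L1 + 0.05) / (L2 + 0.05) must reach at least 4.5:1 for level AA and 7:1 for level AAA for normal text. A sketch of that computation:

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.0, rgb given as 0-255 integers."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG 2.0 contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```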

Another idea came to mind: what if we could see everything but had a hard time digesting the information? Like if we struggled with dyslexia? Chrome extensions to the rescue! For this case, too, we found simulators. We tried dyslexia simulator first but couldn't get it working on our application page, even when adapting its settings. We were not sure if maybe our brains sorted everything out automatically, so we tried another Dyslexia Simulator - and instantly got closer to understanding what it means to view a web page with dyslexia! This simulator scrambled all texts constantly; we had a hard time focusing. It took us way longer to recognize the words, and we couldn't watch it for long.

So what about screen readers? Viv recommended the free NVDA (NonVisual Desktop Access), so we went for it. I was surprised by the audio feedback we received while the application was being installed and set up! Of course this makes sense. It just showed me again how much I don't know about different kinds of technology experiences. Also, I instinctively used the mouse first, and the screen reader instantly commented on everything I hovered over - until Viv told me blind people would use the keyboard, not a mouse. So we tried it on Remember The Milk. The speech output was very fast and I had a hard time understanding it, but I could at least grasp some parts. In particular, I understood that the output did not provide helpful information. For example, after adding a new task I heard that "the input field is empty". So what, how should that help me? Why not provide the information that I could instantly create yet another task? Well, here it showed in practice what WAVE had pointed out: the input field did not have any contextual label.

As the final part of our session, we went halfway through a list of tips for accessibility testing that Viv had found. We noticed that some of the listed points were considered in our application, like not labeling links with "click here", while other points had been disregarded, like the missing h1 tag. Font size was another remark, triggering the realization that although we did try to increase the browser zoom, we hadn't tried to increase the font size in general on the operating system. When trying to do so, we had a hard time finding the related setting in Windows 10! It seems there is only the option to scale everything at once: text, apps, and other items. I cannot tell whether this might be a good way to handle it or not.

After our session, Viv provided me with his notes and thoughts. Here's what I haven't mentioned already.
  • W3C Accessibility
  • JAWS (Job Access With Speech) - a very good screen reader
  • Another idea: Does the page have regions defined to enable a user to quickly jump to sections of the page using a screen reader? (This would have been something with more time to test in the screen reader)

What worked well, what to improve?

At the very beginning of our session we struggled with the technical setup. No matter how many video calls I've had with many different people, this just happens, and I have to remember to take it into account. At first the computer wanted to install updates. Then the call could start, but the microphone was not recognized. When that was finally solved, screen sharing failed to actually show the screen. In the end it worked and continued to work until the end of our session, including sharing control.

As with several previous pairing partners, we chose to pair the strong-style way, with one being the navigator and one the driver, switching roles every four minutes. Although we didn't always switch in time, this worked out pretty well. Conversation flow and collaboration were once again very smooth.

The session proved very valuable for both of us. To further improve, Viv came up with the idea of rather focusing on a small part of the application. Accessibility is a huge area in itself. Spending more time on a smaller feature might have helped us. Something to keep in mind for future sessions!

Viv sees accessibility as a topic similar to security: many times we face lots of technical debt in these areas. We should be mindful of it from the beginning when starting a new application. I totally agree with him. When starting a product from scratch, accessibility is often neglected, and then you end up with a legacy system where it's hard to build it in afterwards.

For both of us it was interesting to pair with another tester remotely and see other people's approaches. At work, we both tend to pair with developers, which is really valuable but has a different outcome. Viv also pairs with other testers, but rather to instruct and provide support. Not being on the same level causes different dynamics. He shared that the strong-style way of pairing helps a lot here: with the back and forth, you really have to contribute. Often people don't see what kind of value they can provide, like when pairing with developers to write unit tests. However, there's always something to be shared, always something to offer. Wise words.

Something to Keep in Mind

If you take one thing away from this post, then let it be this: We all experience the world around us and technology in a different way. Most people encounter one barrier or another when doing so. This doesn't have to be permanent; it can also be temporary or situational. So let's keep accessibility in mind to develop valuable products.

Wednesday, May 9, 2018

Testing Tour Stop #7: Pair Penetration Testing with Peter

Continuing on my testing tour, I had the pleasure of pairing with Peter Kofler. The first interesting thing was that Peter does not identify as a tester, but as a developer. He read about my tour and was intrigued by the approach. He had gone on coding tours himself before and had done a lot of remote pairing sessions, though normally longer ones focused on programming. We were both curious what we would learn from a common testing session of only 90 minutes.

My original experiment was designed to pair only with testers from other teams or companies. So what about having a developer this time? Well, I decided to stick with what I preach: titles are just words, and roles are just words as well. We should not let them limit us, but contribute where we can provide value. So from my point of view, nothing speaks against pairing with anyone interested in learning more about testing.

Our Topic: Security & Penetration Testing

When writing back and forth on Twitter discussing potential topics to pair on, we decided in the end to go with security and penetration testing: a huge area of expertise which is so important and which we both wanted to learn more about.
When exchanging ideas, Peter suggested targeting Dan Billing's Ticket Magpie, an intentionally vulnerable application intended for practicing penetration testing. A great choice! We went for it and simply ran it locally in a Docker container, which provided us with a perfect playground we could freely explore without worrying about breaking anything.

A few days before our session, Peter came up with a quite typical question I had already received from several pairing partners: "Shall / must I prepare anything for our session?" I answered as always, along the following lines: "You don't have to prepare anything (though I won't hold you back); I'll prepare the basis and we'll find our way together during the session." As a result of this brainstorming discussion, Peter came up with the idea to go for the OWASP Top 10 or to try a tutorial on security testing. As I had learned about Burp Suite in Santhosh's security testing tutorial at Agile Testing Days 2017, but had never used the tool myself, I suggested learning more about it and seeing how far we would get with it. All in all, we had some ideas to start with, which was good enough for me.

A Successful Learning Session

We started the session with a short personal introduction. We had only exchanged some messages on Twitter but had never seen each other in person, so we needed a common ground to start collaborating from. Afterwards I explained the high-level structure of my pairing sessions and that I'd like to pair the strong-style way. Peter shared he was not the biggest fan of strong-style but would be willing to give it a try. Only when I started my mob timer application did he realize I really meant to do strong-style with frequent rotations, and he shared that he normally rather uses Pomodoros, where you always have a break after a defined period of time. An interesting idea for one of my next sessions! Well, we started with a rotation of four minutes but then quickly gave up sharing remote control and stopped the timer, having me keep control and trusting our communication to balance our power dynamics. As both of us already had sufficient experience with pairing, this worked out really well for us.

We had several options to start attacking Ticket Magpie.
  1. Blackbox: Explore the application looking for vulnerabilities in order to penetrate the system.
  2. Whitebox: Check out the source code and look for vulnerabilities.
  3. Tool support: Use tools like Burp Suite to discover vulnerabilities.
We decided to start with the blackbox option and see how far we got. We said we could still move on to the other options later on.

I don't want to spoil the fun of detecting the exact Ticket Magpie vulnerabilities on your own, or show an easy way to do it. I can only tell you it was indeed a lot of fun! We started out with the mission to get user access to the system, at best as an admin. And we did! :) We considered ways to get more information from the database, and decided we would use tooling for that, so we postponed it for later.

We then focused on getting access to the actual passwords, and found it was indeed feasible. Instead of merely guessing, changing parameters, and sending requests one by one, we now really wanted to benefit from tooling. We experimented with Selenium IDE to record a request in order to quickly repeat it (but we couldn't find a way to insert values from a file), with curl (but we found we were missing the required parameters to provide), and with Postman (we thought about the tool's pre-request scripting functionality but didn't try it). In the end we simply ran out of time due to our lack of tooling knowledge.
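The kind of repetition we were looking for can also be scripted directly. A hypothetical sketch follows; the endpoint, parameter names, and candidate values are invented for illustration and not taken from Ticket Magpie, and it only builds the curl commands as a dry run instead of sending anything:

```python
# Hypothetical target and parameters - purely illustrative, nothing is sent.
LOGIN_URL = "http://localhost:8080/login"
candidates = ["password", "123456", "letmein"]

def build_curl(username, password):
    """Build a curl command for one login attempt (dry run)."""
    return (f"curl -s -X POST {LOGIN_URL} "
            f"-d 'username={username}' -d 'password={password}'")

commands = [build_curl("admin", pw) for pw in candidates]
for cmd in commands:
    print(cmd)
```

Tools like Burp's request repetition features automate exactly this idea, feeding lists of values into a recorded request, which is what we were missing the knowledge for at the time.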

Throughout the session, we both did some research which often provided the next idea to try. Some of the sites we found useful were the following:

Retrospective? We Wouldn't Have Done It On Our Own

From time to time, we would stop each other from going too fast or in the wrong direction. I really appreciated Peter asking me at the end of our session whether I had felt dominated by him, as he had learned he sometimes tends to do so. Truth is, I sometimes have to actively hold myself back as well so as not to dominate the other one. In our session I did not have the impression that one dominated the other, and neither did Peter. Great. In general our collaboration was really smooth, although I was cautious about it in the beginning as we did not know each other. We didn't stay at one point for too long or get stuck.

It was really cool to see the practice application Ticket Magpie. I really liked the progress we made; only at the end did I have the feeling that we went a bit in circles. However, I have to agree with Peter's remark that it was also really valuable to see which tools do not help in certain situations.

We found that both of us had taken notes during the session, and both of us needed them to structure where we were and where to go next. We decided to share them in a Google document and use them as the starting point for our next session - as there will indeed be a next session; we already agreed on a date for it. Next time we'd like to get better at actively pausing together from time to time, taking notes together, and deciding together where to go from there. Just like this time, we agreed not to set a fixed goal upfront; we both felt doing so would limit our exploration. As this was clearly communicated at the beginning, it was fine for both of us.

All good, but the main thing is: We both thought we knew nearly nothing about security. We both found we actually knew more than we thought. We might have been able to do the same on our own; it might just have taken more time. But although we both wanted to learn more, we simply didn't do it on our own. This might be the biggest benefit of pairing: Together, we tackled the topic, learned about ways to penetrate an application, and practiced it hands-on in a safe environment. What more could we want?

Thank you, Peter, for a great session. I'm already looking forward to our next one, diving deeper into security and penetration testing!

Sunday, April 29, 2018

Testing Tour Stop #6: Pair Formulating Scenarios with Pranav

My testing tour continues! This time with a novelty: For the first time, I did not know my pairing partner before our session. Pranav KS saw the blog posts about my testing tour and decided to give it a try and schedule a session with me. How awesome is that?

Preparation Phase

As I hadn't met Pranav before, neither in real life nor virtually, I decided to reach out via email and explain the session idea, its structure, the tools used, and the approach I'd like to take. The topics he wanted to learn more about were exploration and automation. As we didn't know each other, I suggested exploring an application together, which I would select for our session in case he didn't have a suggestion himself.

A day before our session, Pranav reached out to me and asked if we could pair on automation, as this was his current learning topic where he faced some challenges, especially regarding design. Since my first pairing session with Maaret, I always prepare the subject to pair on, or at least have fallback plans. But I've learned it's even better when my pairing partners bring their own topics and applications to test; so far, doing so seems to increase the value of our sessions. Therefore I thought in Pranav's case: Sure! And I responded by suggesting we work on exactly the challenges he faced and build on whatever he already had.

Meeting for the First Time

We kicked our session off with a short introduction round which built a first basis to start from. When I asked Pranav if he could introduce me to the automation topic to pair on, we both found we had misunderstood each other's previously written communication! I had assumed he worked on an automation project where he faced challenges and we would use this one to pair on. I guess he had assumed I would provide a practice project to pair on automation so we could learn together how to do it well, and he could then apply the lessons to his project afterwards.

So, despite all the preparation work done upfront, suddenly neither of us was prepared anymore. We had to improvise! Challenge accepted. I found out that one of the challenges Pranav faced was deciding which scenarios to automate. Also, he was using SerenityBDD with Cucumber, just like my team. This made me think of the various demo and practice repositories for SerenityBDD. We checked them out and found several were using a sample todo application to demonstrate how you could design your automation. We decided to go for it!

Identifying and Formulating Scenarios

To decide which scenarios to write, we first had to get to know the application's features. We agreed to create a mind map to document our findings in a lightweight way. We mapped out the features we found when interacting with the application: creating, editing, completing, and deleting a todo. A counter showed how many active todos were left. Bulk edit options let us complete, re-activate, or clear all todos.

In the next step, we started to formulate Cucumber scenarios. We did not have any project set up yet, so we decided to simply use a text editor in the beginning. When writing the scenarios, it became obvious again that this task sounds easy, but indeed is not. I lack practice in formulating good scenarios myself, but I've read a lot about patterns and anti-patterns, and was often reminded of what I heard in the workshop "Writing Better BDD Scenarios" by Seb Rose and Gáspár Nagy at European Testing Conference 2018. Sharing this knowledge was great, and we improved our scenarios iteratively, step by step.

We discussed how to best express intention without stating any implementation details. We tried to find fitting scenario titles. We thought about where to parametrize and where not, as well as where to split scenarios into separate ones so that each would test the outcome of only one action. We pondered how to best define the "given" state to start from, what the action in the "when" would be, and how exactly to formulate our expectations in the "then" part. The resulting scenarios were far from perfect in the end, but definitely a lot better.
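To give a flavor of what we were aiming for, here is a sketch of one such scenario for the todo application in Gherkin; the feature name and exact step wording are my own illustration, not the scenarios we actually wrote in the session:

```gherkin
Feature: Completing todos

  Scenario: Completing a todo decreases the active todo count
    Given my todo list contains the active todos "buy milk" and "water plants"
    When I complete the todo "buy milk"
    Then the active todo counter shows 1 item left
```

Note how the steps describe intention from the user's point of view and leave out implementation details such as buttons or selectors.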

We had the option either to formulate only a few scenarios and already start automating those, as a minimal approach from start to finish, or to formulate all the scenarios we deemed most important before starting to automate them. Well, we ended up with the latter option, formulating all scenarios first. However, in the end we both felt this was the less satisfying approach, as we only managed to set up our project but could not start automating within the session timebox. Still, in the last minutes I learned how to set up a Maven project from an archetype in IntelliJ. It's been some time since I used Maven, so a refresher was welcome.

Retro Time!

Our session timebox was quickly coming to an end. For this retrospective, I shared my screen and we created a mind map of our observations and thoughts together.

The first thing mentioned: our progress was a bit slow. We found that coming up with good scenarios and finding suitable formulations is hard and takes its time. Well, there's a reason Gáspár and Seb hosted a 90-minute workshop on just this topic and announced a whole book about it. Still, Pranav and I agreed we perhaps should have taken the other route: formulating only one to three core scenarios and then starting to automate them, so we'd have at least one scenario automated in the end. Well, we addressed this option during the session and decided differently, which is okay. Next time I should advocate for the "minimum viable solution" option, though. Also, as the whole session was rather unprepared and improvised, we felt we lacked a bit of focus, or an end goal. Clarifying this at the beginning might have helped us with the session scope as well.

What was really good was having an application for which neither of us needed special domain knowledge. Most people know one todo application or another and what they would expect from it from a user's point of view. We did not struggle to come up with behaviors to check.

Working with the tools we used left us with mixed feelings. Pranav didn't find the text editor convenient for writing scenarios. He would have preferred a tool from which we could have exported properly formatted scenarios and imported them into our test project. Personally, I did not mind, as this was a quick and easy way to get started. We agreed, however, that working with the other tools was convenient for both of us. The mind map was easy to edit, the mob timer did its job, and especially the screen sharing with granted remote control worked very smoothly. I really enjoy Zoom for its great videoconferencing quality, and being able to share screen control makes the tool even more valuable. It's free, it's easy to use, and the result was very smooth switching between our driver and navigator roles. The only thing we noticed: as we were working on my Windows system and Pranav used macOS, he had trouble with keyboard shortcuts before we figured out that he had to use Windows-style shortcuts, with the Control instead of the Command key. But all in all, this was a novelty on my testing tour so far: the first virtual pairing session where we could really do strong-style pairing without technical obstacles getting in the way. Awesome.

Speaking of our collaboration: Pranav told me it was his first time trying strong-style pairing. He had paired before, but not in this way. I had sent him Llewellyn Falco's blog post explaining the strong-style approach upfront so he could familiarize himself with it beforehand. Although the concept was new to him, it went very well and we switched fluently between roles. This pairing style really forces you to express your thoughts and intentions clearly - great for learning! We only caught ourselves occasionally finishing a sentence before handing over, or taking over mouse control as navigator to point at something, but that was it. Kudos to Pranav!

What's Next?

My last posts about my testing tour seem to have fallen on fertile ground. The idea of scheduling a pair testing session has spread, and I'm happy to announce that three more sessions have already been arranged. They will all take place within the next six weeks. And more are to come! This is especially awesome as I got the opportunity to talk about my testing tour and share my lessons at both CAST and SwanseaCon this year.

My original experiment was designed to include at least ten sessions with six different testers by the end of October 2018. I'm already at six sessions with five different testers, with three more sessions to come. So I'm well on my way to fulfilling my own requirements. Still, I'm eager to continue my testing tour until the end of October and see how much more I can learn from all the other testers out there. Are you interested in joining me on my tour? Just schedule a pair testing session with me! :-)