Thursday, May 24, 2018

Testing Tour Stop #10: Pair Exploring Kitchen Planning with Alex

This. Was. Fun. And lots of it! Alex Schladebeck and I went out exploring and came back with lots of potential issues and questions.
When Alex scheduled a pair testing session with me I leaped for joy. Alex had already impressed me a lot with the talks she gave and how she conveyed her messages. The last time she astonished me was at European Testing Conference last February, where she did live testing on stage while talking about the testing she was doing. How awesome and courageous was that? I was fascinated. As I had met her several times already, I grasped the opportunity and challenged Alex to take it a step further and do it without knowing the system under test beforehand! Well, Alex indeed accepted the challenge and is going to give this session at Agile Testing Days at the end of this year. I'll be there, front row, cheering her on.


First Things First

Alex asked to explore an application together, and she was happy with any proposal I'd bring up. When starting our session, I presented her with the kinds of applications I had already identified as potential target systems. However, Alex had inspired me by accepting the mystery application testing challenge, so I also offered her a different option: we could think of any word that comes to mind, google it, and then go for the first app we come across. Alex really liked the idea, but then decided to keep that for a future session and instead tackle the IKEA kitchen planner together! Why? Because we normally don't get to test something where 3D is involved. And as it happens, Alex had just recently planned a kitchen herself. I'd say this qualified her as our subject matter expert!

Strong-style pairing? Sure, let's go for it! Mind mapping tools or anything else for note taking? Nope, Alex confessed she's the old-school pen and paper type of person. So we agreed that each of us would jot down our own notes the analog way.

The Fun of Testing & Talking About It

We started off from the following link, pointing us to the Great Britain version of the IKEA kitchen planner: http://www.ikea.com/gb/en/customer-service/planning-tools/kitchen-planner/ It seems that when I researched potential test applications for my pairing sessions, I had googled for something like "ikea kitchen planner" and just picked one of the results. Only later, when writing this blog post, did I realize that the corresponding pages for other countries not only have quite a different layout, but also don't offer the software we tackled. In our case, two planning applications were offered: the METOD Planner and the 3D Kitchen Planner. We went for the METOD one as it was advertised as the new one.

The first thing we noticed was the product claim on the homepage: "Choose your style and get your quote in 1 MINUTE". This felt like an open invitation to test! Well, we stumbled right at the beginning. It took us quite a while until we realized why the claim had been made. The tool basically asked us to provide our kitchen floor plan as input and then automatically filled the available space with elements of the chosen kitchen style. If the 1 minute claim referred to this calculation done by the tool - well, then it's a valid claim. However, this is not what we expected. The claim raised the expectation that it would only take 1 minute from choosing the style until we got the quote. But shaping our kitchen alone took us way longer, let alone choosing the kitchen design. I guess you could compare this to the dubious "5-minute meals" cookbooks! There are always way more things to do, and they probably take you way longer than it takes a head chef with already prepared ingredients at their disposal.

After selecting one of the proposed kitchen styles, we were referred to a floor plan editor, asking us to adapt the displayed room shape to our actual kitchen.
  • The floor plan editor area was sized so large that it exceeded our screen space. Because of this, we did not notice at first that the kitchen plan already had a door on the bottom wall, so we wanted to add one. A sidebar on the right offered us structural elements: a door or a window. We thought we could drag the desired door onto the floor plan, but found we had to click on it so it got added to the selected wall. Interestingly, the top wall was selected by default, so the door got added to the same place where the sink and cooking area elements were placed by default. As they overlapped, they were highlighted red to indicate that they were invalidly placed.
  • Although we were not able to drag and drop an element onto the floor plan, we could drag and drop it within the editor area to change its position. As expected, we could only move it along the walls, not into the middle of the room. We found we could resize the elements as well as the room walls. When reducing the size of the walls, the door did not move with them, so it got displayed outside the kitchen. Well, that's one option to handle that. At least it got validated as incorrectly placed.
  • The room could only be adapted to rectangular forms; there were no wall elements or any option to shape corners or define other angles. Well, I guess that's a feature which didn't make it into an MVP covering the most common use cases.
  • When selecting the door we added, it offered us some action buttons. One of them looked a lot like an undo button, but it actually changed the direction in which the door opens. An undo button was dearly missed; it's an editor after all, and as users we would like to be able to return to a previous state. These action buttons alone would already deserve a separate session.
  • We noticed several localization issues regarding translations. For example, the structural elements offered the tooltips "DOOR" and "WINDOW", which looked a lot like labels that had been missed in translation, especially as other tooltips showed "Sink area" and "Cooking area". We also came across several spelling errors, so we noted localization as a follow-up topic to be explored.
We noticed several areas we could dive deeper into. For now we decided to move forward and pressed a button to "design my kitchen". Our editor area changed and we now saw a 3D visualization of our kitchen. Quite some things to discover here!
  • Again, the editor area was larger than our screen size, so we tried to scroll in that area - and noticed that using the mouse wheel there would zoom in and out of the 3D view. As a user I really dislike this kind of implementation which limits the scrolling area available, forcing me to scroll in the sidebar area on the right.
  • This sidebar now offered different kitchen layouts to select the outline of the kitchen furniture. We decided to go for the largest option and wanted to see if the windows we placed were handled correctly. We found an icon in the editor toolbar showing a moving camera - which turned out to be a 360° viewer. However, once clicked, the view continued to move without stopping! Quite unexpected.
  • We realized that the door for which we had changed the direction earlier was displaying the handle on the wrong side in the 3D visualization.
  • When clicking an element in the editor, it got selected and the sidebar offered us related alternatives. After choosing another option, however, all related elements got changed, not only the one we had selected. Weird! We'd rather have everything selected so we have a preview of what gets changed and what doesn't.
  • We thought about getting back to the floor plan. Maybe the "2D" toolbar icon would lead us there? But no, it was a 2D view of the kitchen walls. It also offered an option to view the kitchen from top - but wait, why is the countertop suddenly displayed in black when we had changed it to a white one before?
  • When exploring the different modes, we noticed how the toolbar options changed. Interestingly, the toolbar did not always build up from left to right. When selecting top view in the 2D mode, an option vanished from the middle of the toolbar. Why had it not been placed on the right-hand side if it does not apply to all modes? This expectation probably has a lot to do with the direction we're used to reading in; we assumed these kinds of expectations are heavily cultural.
We had noticed a few buttons offered for navigation. However, after what we had seen before, we were already scared to use those buttons. Not a good sign, considering people would have to trust IKEA to really deliver the kitchen as designed! It's not the cheapest product, either.
  • Starting from the 3D view, we chose to risk the browser back button. And it took us back to the homepage without a warning that our changes would get lost. Dislike!
  • We decided to give it a try and now used the browser forward button, which took us back to the floor plan, not the 3D view. Most interestingly, no action buttons were offered on elements anymore, so we could not delete any elements we had placed! This was when Alex shared that you can actually use the following as a heuristic for testing: "if you run out of paper space for your notes it's a bad sign" (freely quoted). Even weirder behavior showed up when I tried this again later on. Now the elements were not even displayed anymore. Also, when I then misplaced elements, went forward to design the kitchen, and confirmed the warning about the incorrect positioning, I suddenly saw the 3D view without the toolbar in the editor area!
  • We started again from the 3D view and now gave the back button offered by the application a try. This time we were instantly taken to the floor plan as expected, with all action buttons working.
  • Refreshing the page? Oh, getting back to the homepage with everything gone.
  • Home button? It also took us back to the homepage without a warning that our changes would get lost.
  • New button? Again no warning! Really? Also: this time we went to the floor plan instead of the homepage; but how to select the kitchen style then?
By navigating back and forth, we found further issues with the different editor views at second glance.
  • In the floor plan, we placed two doors, one on each side of a corner. Although we flipped one door to open to the outside of the room, the elements were highlighted red as incorrectly placed. We started to drag them farther away from each other, but validation only passed at an unexpectedly wide distance. In our eyes, they still should not interfere at all when placed more closely together. Well, with both doors opening to the inside they might, so we switched the door back to open inside - and suddenly the validation passed! Flipping the door back to open to the outside, it failed again. Fascinating.
  • In the 3D view, we discovered that the arrows offered in the toolbar moved the room in exactly the opposite way from what we expected. This felt really strange!
  • Moving the camera, we noticed it was hard to get back to a centered view. We found the camera icon reset the view (side note: the icon rather looked like a screenshot icon). However, depending on the selected kitchen layout, it reset the view to a different perspective, so it happened that we ended up looking at a different wall than when first navigating to the 3D view.
  • We wanted to delete a kitchen element in the 3D view, but selecting an element only provided us with an edit icon. Clicking it, we found the remove option hidden inside the edit menu! Why? After removing an element, you could fill up the space again with a new element. However, this new element no longer offered the removal option in its edit menu!
  • When choosing a different option for the kitchen parts standing on the floor, the related cupboard elements were listed in the sidebar recommendations for easy selection. But if we first changed the cupboards, the related floor furniture was not listed in the recommendations. Alex wondered whether staff are trained to do it this way and never the other way around, so they would never notice this.
  • We found that there were actually two kinds of signals to indicate an ongoing process: a progress bar and a loading circle. Why two different ones? This does not feel consistent.
Throughout our testing session we found that we often wanted to try the same things, having the same thoughts in mind. We also talked about why we wanted to try those things. Alex made a great point here. She said it became obvious that we applied our knowledge of how software is built when testing. Like the fact that sometimes things are technically only updated once you click outside an element. Or that elements could share an implementation although they should differ (like being able to delete the last window but not the last door). It would be interesting to compare the application with the other 3D Kitchen Planner and see whether a potential re-use of implementations might have introduced issues.

We also wondered why we did not find an option to export what we planned in the METOD Planner so we could import it into the 3D Kitchen Planner for more detailed planning. We also wondered whether IKEA staff use the METOD Planner themselves when consulting customers. When transferring my notes into digital form I got curious. I finally went further and, after designing the kitchen, chose to "save & see my quote". I had to accept a legal notice first, then select a store (Great Britain only), and then finally received the quote and a related project code. However, I couldn't copy it - interaction was disabled! I wonder why. Well, instead they offered to let me download my project code; but it got downloaded as an image! So I tried to print the quote, which triggered the generation of a PDF file. From there I could finally copy the code to store for future reference and recover my kitchen plan.

All in all, we found lots of potential issues. But the question is always: Are they relevant? Are they known but their fix would just not provide enough value? Maybe. Still, as users (okay agreed, testers) from the outside we stumbled.

Interestingly, when Maria Kedemo learned that Alex and I tackled an IKEA application, she offered to forward our findings to whom it may concern.

@Maria: Done :) Thank you!

Reflection Time

First of all: we agreed that the session was a lot of fun. We really enjoyed doing hands-on testing together! Alex shared she was once again shocked and excited at the same time when testing a production application and seeing how many potential problems there are. This is like a litmus test for her: if there are already so many problems on the surface, then there are more problems deeper down.

Alex had the impression that our strong-style pairing was not really strong-style, but more of a discussion, talking about testing while testing. In my opinion we adhered to the driver role as being the one on the keyboard executing the navigator's intention; however, we let our driver co-navigate in addition. Still, it felt right and was a fluent back and forth with both of us contributing in many ways, so it was absolutely fine for me. The interesting part of talking about testing was the moments when we realized what we were doing, and why we were asking the questions we asked. We often wanted to try the same thing, we used oracles to decide what to expect, we used our insights into how systems are built - making all of this explicit.

Alex also shared she was nervous before the session as she doesn't get to do hands-on testing that much anymore. This is really interesting. Although I do lots of hands-on testing in my job, I am nervous before each and every pair testing session, even if I know my pair, as in Alex's case! Fear and uncertainty about my skills were major reasons why I decided to do the testing tour in the first place.

During our session, I felt we sometimes lost focus; we saw so many things in so many places. As Alex put it nicely: the squirrel factor. She agreed that we had many threads going, but we either followed them or left them for later exploration. Well, especially for new applications this is often how you do it: you first go broad and then dive deep into single areas. Still, I felt I have to focus on smaller parts more; this was also a lesson from previous sessions. We both agreed that it would be great to come back and do another session, diving deep this time.

Also, once again, I have to get better at note taking. After our session, my notepad was a mess; and once again I would have failed Maaret's test of being able to say quickly how many issues I found, how many questions, how many future charters I discovered and why, and so on. Why does that still happen when pairing, although I know better already? Last time I even thought about recording the session. It would not have helped me present a quick overview, but it would indeed have helped me recapitulate the session, as my notes were quite sparse compared with what we found.

One more point Alex brought up was that we're testers in every situation, seeing issues in processes at airports and everywhere else. I relate to that so much. I like to say that being nitpicky might not be the best quality around family and friends, but it's a great card to play while testing.

The Testing Tour Experiment

This was my tenth stop, so in fact my original experiment is complete!
  • I did 10 pair testing sessions before the end of October 2018, each lasting at least 90 minutes.
  • I paired with 9 different testers, from both my company's internal community and the external community.
  • The topics focused on exploration and automation, as well as special topics like security and accessibility.
  • I published one blog post per testing session and also made this personal challenge transparent in my company.
Now, did it prove my hypothesis that pairing and mobbing with fellow testers from the community on hands-on exploratory testing and automation would result in continuously increasing skills and knowledge as well as serendipitous learning? I would say it did. However, I will have to take a closer and more critical look when preparing to share my lessons learned at CAST and SwanseaCon.

Still, theoretically I could stop now. But I decided I'll continue to accept sessions until the end of October. Why? Because I'm still learning, I'm still contributing, so it's the right place to be and the right thing to do. Going on a testing tour worked very well for me and I recommend giving it a try.

Friday, May 18, 2018

Testing Tour Stop #9: Pair Experiencing the User Perspective with Cassandra

If you haven't come across Cassandra H. Leung yet, I highly recommend checking her out, especially her insightful blog. I have been following her for some time now and she has inspired me a lot, especially as someone who took the testing community by storm and shared her experiences at conferences early on. Therefore I was really glad to hear she had recently moved to Munich, and even more so when she scheduled a pair testing session with me.

Prepping

For our session, Cassandra asked to focus on identifying heuristics and oracles used for testing. For our convenience I prepared some sources for heuristics to generate ideas from.
I also noted down to actively look for oracles, be it what the UI provides, the product documentation, source code we might have available, or any similar applications users might be familiar with. Also, I had a selection of potential systems under test in mind that we could choose from.

Originally, we planned to do our pair testing session on-site as we're both based in Munich. Unfortunately, life happens when you make plans, and it turned out not to be feasible for us to meet in one place, so we decided to do the session remotely instead.

Personae for the win!

We started by reviewing the cheat sheet by Elisabeth Hendrickson, asking ourselves which heuristics we don't use every day. For my part, I don't work with user personae a lot, although I would like to do so more. Cassandra agreed, so we decided to explore an application from a user's point of view. Now which system to test? Of the products I proposed, Cassandra chose Chewy, an online shop for pet supplies. We set up a timer to support our strong-style pairing, and off we went.

As our first persona we came up with Katie, a woman in her early twenties who just got a kitten for the first time. Starting off with this basic idea, we developed the persona on the fly while exploring this e-commerce website, which neither of us had come across before. Katie doesn't have lots of money as she's still a student. She doesn't know much about cats yet but wants the best she can get for her new pet. Katie is impatient and doesn't like to read lots of text; she rather wants to see the information she's looking for quickly.

As Katie, we searched the shop for supplies she would need for her kitten as well as information helping her decide what's needed. Just by doing so we frowned many times already. Why were so many dogs displayed when searching for cat food? Why was the video offered on a cat food product page also showing only dogs? Why was the product filter behaving this way when we know filters behave differently on other online shops like Amazon? Lots of things surprised us, some made us feel lost, and a few features turned out to be poorly implemented from a user perspective. Or not accessible, like advertisement pictures with lots of text that a screen reader would not be able to cope with. Oh, and have I told you Katie lives in the UK? We noticed all prices were displayed in dollars, and there was no language selector anywhere to be seen. When signing up for a new account we noticed our UK address was indeed not accepted and we couldn't even provide a country. Well, that was it for Katie.

So we decided to switch persona. This time we slipped into the role of an old bird lady. We didn't give her a name, but let's call her Berta. She has had birds all her life and knows how to care for them. She is retired, but money is not the biggest problem for her, and neither is time. She is familiar with e-commerce websites, trying to stay up to date with what's going on in the world. She doesn't have the highest education but is definitely street-smart.

Unlike Katie, Berta knows exactly what she's looking for. She has her favorite brands and heads straight for the desired supplies with the intent to purchase. As Berta, one of the first things Cassandra noticed was that the main menu's food category for cats offered different types of food; the one for birds, however, offered different types of birds. What?! Would that mean birds were the food to be consumed? It might be that this kind of category had proven more successful in terms of conversion, but it still felt strange to us. When going further as Berta, we raised lots of questions regarding features like "Autoship & save", which allows subscribing to regular deliveries - but we could only choose it for all eligible cart items at once, not select different options per product. Items marked as "deal" turned out to be interesting as well. First it took us some time to figure out that deals were products offered at reduced prices available only "today". Well, as the US covers multiple time zones, we wondered: when does "today" end? A question to be investigated in a separate session. Another really interesting discovery was the shipping policy. The text spoke about "the contiguous US" - but neither of us was sure that the word "contiguous" even existed. Kind of funny, especially as the very next sentence was "Talk about simple!". Yep. If even Cassandra as a native speaker stumbled here, it definitely was not simple, and therefore not accessible for certain educational levels. By the way, contiguous does indeed exist.

The whole session was lots of fun. We really made an effort to imagine how the persona would think and behave, always trying to stay in the role - even though as testers we noted several things along the way. Even better, the session was also really productive when it came to feedback. We found lots of issues, doubts and question marks in a short period of time. The mere fact that many features caused negative emotions or at least confusion was a signal that we would definitely have to talk about lots of things if we were helping to test this product.

A mental note to myself: I should really slip into the user's role more often, play through scenarios, go on their journey. It's really worth it. As a reminder, here's a video I stumbled upon which makes a case for the importance of dogfooding.

How was it?

Cassandra shared that this was her first time doing real strong-style pairing, which triggered some questions for her at certain points: "should I..?", "can I...?" Still, she liked our collaboration and also the timer we used. In the beginning, she wasn't sure if four minutes per rotation would be enough, but then figured that we still followed our path when switching between driver and navigator roles. We really built upon each other's ideas without abandoning them. It was not about one person trying to get as much done as possible within the four minutes because that's all you get before the next one takes over. That would have been a nightmare. So, once again, collaboration was fluent.

What Cassandra missed was the option to look behind the curtain and see what's beneath the surface. She noticed URL changes when it came to the cart which we could not explain. It would be nice to explore the reasons for this, and also learn how the content management system behind it worked. With more access we could also have used a different heuristic we had considered in the beginning: following the data. For me, this is valuable feedback for preparing the next pair testing session. I plan to look for practice products that enable us to go deeper, like open source applications we can run locally.

What we both liked was that we did not get stuck with functional testing of forms for example, as both of us are quite used to that. We stayed focused on our mission throughout the session.

Some troubles we faced in the beginning were of a quite different nature. Cassandra had just gotten new headphones, quite expensive ones at that. During the first half-hour they simply refused to keep working several times, leaving us unable to hear each other. Only a restart helped in these cases. One lesson I have learned working with people remotely is that these calls are always prone to tech issues, no matter how experienced the people involved are.

Last but not least, one thing I learned during my very first session already: I am really bad at note taking when pairing. It seems the collaboration part takes all my focus away from doing it properly. The bad news: I am still really bad at it and haven't learned it so far. I guess I need to force myself and my pair to find a sort of routine for collaborative situations as well. This time again, we generated so many ideas and so much feedback - but I hardly noted down anything, nor did we do it together as we should have. Doing so might even have broken our flow, but not doing it made it really hard to sum things up afterwards. What if we had simply recorded our session in addition to a few high-level notes? I guess that would have made it way easier to recapitulate everything.

On a Personal Note

While my testing tour started slowly with about one session per month at the beginning of the year, the frequency of stops has increased to one per week. Four further sessions are already scheduled for the coming weeks. It seems one session per week is also my personal limit. Although the testing sessions are only 90 minutes, each one takes considerable time to prepare and follow up on. A lesson I've already learned: as soon as more people read about my tour and got intrigued, I had to block my calendar more and postpone new requests further into the future. What a luxury situation!

Friday, May 11, 2018

Testing Tour Stop #8: Pair Accessibility Testing with Viv

On today's stop on my testing tour, I had the pleasure of pair testing with Viv Richards. I got to know him via SwanseaCon. He was the first one to accept me as a newbie speaker last year, and he gave me the opportunity to speak again at this fabulous conference this year! I'm really glad it worked out to have him as part of my tour. It was a fun session full of insights.


Accessibility - The Neglected Child

Viv left it to me to choose a topic for our pairing session. He said he would be happy to explore any area I prefer or am comfortable with, as he sees himself as a "jack of all trades, master of none". I so relate to that! Well, I decided to go for accessibility testing this time. Why? In my opinion this is a very important topic, often overlooked or postponed. I have never had the opportunity to actively work on a product where this was a requirement, or even considered in any way. I have read some things about it, but really lack practical experience. On top of that, I knew Viv had experience in this area. Back when he was still in a developer role, he worked on a product where accessibility was a big topic.

To prepare for our session, I researched some pages which would help us kick it off. As shared in the post about my last stop, I don't like to limit the scope of our sessions too much. I prefer to keep enough freedom for us to explore in any direction it might lead us; the main goal is learning. Still, I'd like to have some options prepared upfront. Here's what I found.

Hands-on Testing

For our session, I decided not to go for one of the demo pages, but rather to try a production application and see what accessibility looks like in the real world. I chose the web version of the todo application Remember The Milk. I don't use it myself, but I tried it out years ago when searching for a task management solution that fit my needs.

We started the session by imagining we had no mouse available and could only use the keyboard to navigate the application. We could successfully sign up for a new account this way, but then quickly faced problems. It was not obvious at all how fields were ordered, and we often missed visual feedback about where the current focus was. Viv shared that a screen reader tool would have problems with that. But even just by not using a mouse, we failed to navigate to certain fields, like setting additional options when creating a new task. As we stumbled heavily right from the beginning, we decided to switch to simulating a different kind of user experience.
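
By the way, such a keyboard-only pass can also be scripted to some extent. Here's a minimal sketch, assuming Python with Selenium and a chromedriver available (the URL and the number of tab stops are just illustrative), that tabs through a page and prints which element currently holds the focus - handy for spotting an unclear focus order:

    from selenium import webdriver
    from selenium.webdriver import ActionChains
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Chrome()
    driver.get("https://www.rememberthemilk.com/")  # illustrative URL

    for step in range(10):  # walk through the first ten tab stops
        ActionChains(driver).send_keys(Keys.TAB).perform()  # keyboard-only navigation
        focused = driver.switch_to.active_element           # element that now has focus
        label = focused.get_attribute("aria-label") or focused.text[:40]
        print(step, focused.tag_name, label)

    driver.quit()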

What if we were just shortsighted and didn't have an optical aid at hand? We set the browser zoom to 200%. The page didn't look as nice anymore, but it was still fully functional. We could reach all page areas and elements. The same held when reducing the zoom below 100%.

But what if we only had one hand available (maybe carrying a child in our arms), and it might not be our usual one? I'm right-handed, so I tried to use the mouse with my left hand while using the application. Though this was slower, it worked out well. Interestingly, during this time we came across functional application behavior we would not have expected.
  • We just wanted to add a reminder to one of our todos, but doing so took us to the settings page: we first had to define a device to be reminded on. Hm. Okay, we chose the computer. And the app instantly opted us in for all kinds of notifications. I don't like it when apps sign me up for everything by default; it just leaves me with a bad feeling.
  • The settings dialog showed a save button - but an inactive one. Why? We found it was only meant to save changes made to the kinds of notifications we'd like to receive. Not obvious, not nice.
  • Going further, we failed to define a reminder for a specific time; only days or weeks before our due date were available. For me this would be an important feature for a task management tool. But okay.
  • Then we discovered that subtasks can be sorted by drag and drop. There was a configuration menu, but it only offered one option, drag and drop sorting, and it could not be unchecked. Really strange! Only later did I find that the related help text explained that subtasks can only be sorted the same way as the original task list they belong to.
Well, the drag and drop functionality triggered the next idea. What if we could not use JavaScript? Viv shared that in his experience this was a valid case: they first had to develop without using JavaScript at all, which meant they needed a much simpler UI. To simulate this in our case, we opened the Chrome Dev Tools and disabled JavaScript in the settings. We learned that you need to keep the Dev Tools open for this to work. After refreshing the application page, we found it could not be loaded at all. However, it also did not provide any feedback as to why. At the very least, a notification that JavaScript needs to be enabled would be required so people don't get lost.
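
If you want to repeat this check without keeping the Dev Tools open, it can also be scripted. A minimal sketch, again assuming Python with Selenium and chromedriver (the URL is illustrative): block JavaScript through a Chrome content-settings preference and then look at what the page actually renders.

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    options = Options()
    # 2 = block; this content-settings preference turns JavaScript off for the session
    options.add_experimental_option(
        "prefs", {"profile.managed_default_content_settings.javascript": 2}
    )

    driver = webdriver.Chrome(options=options)
    driver.get("https://www.rememberthemilk.com/")  # illustrative URL

    body_text = driver.find_element(By.TAG_NAME, "body").text
    print(body_text or "(nothing visible rendered without JavaScript)")
    driver.quit()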

We decided to start using tools in general to get an overview of existing accessibility problems. Viv recommended the Chrome extension WAVE Evaluation Tool. It presented a really nice overview of the current page's accessibility issues as well as explanations of why these points are considered problematic. This way we found issues like missing labels for input fields, missing alternative descriptions for images, and structural issues like having h2 headers but no h1 header to start from. We found that ARIA roles, states, properties and labels were provided as expected. To my surprise, the tool also pointed out that an unordered list was used! When researching later on, I learned that incorrectly defined unordered lists are easily comprehensible as a formatting element for sighted users, but present a problem if you have to rely on a screen reader, which interprets them as single paragraphs without providing an outline. WAVE also offered the option to see the page without any styles as well as to test for a sufficient contrast ratio between foreground and background colors. In the case of Remember The Milk, some elements did not provide sufficient contrast to fulfill the AA or AAA levels defined in WCAG 2.0.
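
For reference, the contrast check is based on the relative luminance formula defined in WCAG 2.0, so you can reproduce it yourself. A small sketch in Python (the example colors are made up):

    def relative_luminance(rgb):
        # WCAG 2.0 relative luminance for an sRGB color given as (r, g, b) in 0-255
        def linearize(c):
            c = c / 255
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(color_a, color_b):
        # Ranges from 1:1 (identical colors) up to 21:1 (black on white)
        lighter, darker = sorted(
            (relative_luminance(color_a), relative_luminance(color_b)), reverse=True
        )
        return (lighter + 0.05) / (darker + 0.05)

    # Example: light grey text on a white background
    ratio = contrast_ratio((150, 150, 150), (255, 255, 255))
    print(f"{ratio:.2f}:1 - AA (4.5:1): {ratio >= 4.5}, AAA (7:1): {ratio >= 7.0}")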

Evaluating the contrast prompted us to consider color blindness. There are Chrome extensions to simulate this as well. We tried Spectrum and I want to see like the colour blind, both offering different display modes for the page. We realized we didn't know the technical terms to describe the different experiences when it came to colors!
What helped me get a quick overview of these different types was the color blindness table of the color-blind npm package. All in all, Remember The Milk did quite well; only with low contrast did we deem it hard to use.

Another idea came to mind: what if we could see everything but had a hard time digesting the information? Like if we struggled with dyslexia? Chrome extensions to the rescue! For this case, too, we found simulators. We tried dyslexia simulator first but couldn't get it working on our application page, not even when adapting its settings. We were not sure if maybe our brains sorted everything out automatically, so we tried another Dyslexia Simulator - and instantly got closer to understanding what it means to have dyslexia and view a web page! This simulator scrambled all texts constantly; we had a hard time focusing. It took us way longer to recognize the words, and we couldn't watch it for long.
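
To give a rough impression of the effect: such simulators typically shuffle the inner letters of longer words while keeping the first and last letter in place (the extension does this continuously, animating the text). A tiny sketch of that idea in Python, just to illustrate the principle:

    import random
    import re

    def scramble(text):
        # Shuffle the inner letters of words with four or more characters,
        # keeping the first and last letter fixed
        def scramble_word(match):
            word = match.group()
            inner = list(word[1:-1])
            random.shuffle(inner)
            return word[0] + "".join(inner) + word[-1]
        return re.sub(r"[A-Za-z]{4,}", scramble_word, text)

    print(scramble("Remember The Milk helps you manage your tasks"))
    # e.g. "Rebemmer The Milk hleps you magane your tksas"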

So what about screen readers? Viv recommended the free NVDA (NonVisual Desktop Access), so we went for it. I was surprised by the audio feedback we received while the application was being installed and set up! Of course this makes perfect sense; it just showed me again how much I do not know about different kinds of technology experiences. Also, I instinctively used the mouse first, and the screen reader instantly commented on everything I hovered over - until Viv told me blind people would use the keyboard, not a mouse. So we tried it on Remember The Milk. The speech output was very fast and I had a hard time understanding it, but I could at least grasp some parts. Above all, I understood that the output did not provide helpful information. For example, after adding a new task I heard that "the input field is empty." So what, how should that help me? Why not provide the information that I could instantly create yet another task? Well, here it showed in practice what WAVE had pointed out - the input field did not have any contextual label.

As the final part of our session, we went halfway through a list of tips for testing accessibility that Viv had found. We noticed that some of the listed points were considered in our application, like not labeling links with "click here", while other points had been disregarded, like the missing h1 tag. Font size was another remark, triggering the realization that although we did try to increase the browser zoom, we hadn't tried to increase the font size in general at the operating system level. When trying to do so, we had a hard time finding the related setting in Windows 10! It seems there is only the option to scale everything at once: text, apps, and other items. I cannot tell whether this is a good way to handle it or not.

After our session, Viv provided me with his notes and thoughts. Here's what I haven't mentioned already.
  • W3C Accessibility
  • JAWS (Job Access With Speech) - a very good screen reader
  • Another idea: Does the page have regions defined to enable a user to quickly jump to sections of the page using a screen reader? (This would have been something to test in the screen reader, given more time.)

What worked well, what to improve?

At the very beginning of our session we struggled with the technical setup. No matter how many video calls I have had with many different people, this just happens, and I have to remember to take it into account. At first the computer wanted to install updates. Then the call could start, but the microphone was not recognized. That finally got solved, but then screen sharing failed to actually show the screen. In the end it all worked and continued to work until the end of our session, including sharing control.

As with several previous pairing partners, we chose to pair the strong-style way, with one of us being the navigator and one the driver, switching roles every four minutes. Although we didn't always switch in time, this worked out pretty well. Conversation flow and collaboration were once again very smooth.

The session proved very valuable for both of us. To improve it further, Viv came up with the idea of focusing on a smaller part of the application instead. Accessibility is a huge area in itself, and spending more time on a smaller feature might have helped us. Something to keep in mind for future sessions!

Viv sees accessibility as a topic similar to security. Many times we are facing lots of technical debt in these areas. We should be mindful of them from the beginning when starting a new application. I totally agree with him. When starting a product from scratch, accessibility is often neglected, and then you end up with a legacy system where it's hard to build it in afterwards.

For both of us it was interesting to pair with another tester remotely and see other people's approaches. At work, we both tend to pair with developers, which is really valuable but has a different outcome. Viv also pairs with other testers, but more to instruct and provide support; not being on the same level causes different dynamics. He shared that the strong-style way of pairing helps a lot: with the back and forth, you really have to contribute. Often people don't see what kind of value they can provide, like when pairing with developers to write unit tests. However, there's always something to be shared, always something to offer. Wise words.

Something to Keep in Mind

If you take one thing from this post with you, then let it be this: we all experience the world around us and technology in different ways. Most people encounter one barrier or another when doing so. It doesn't have to be permanent; it can also be temporary or situational. So let's keep accessibility in mind to develop valuable products.

Wednesday, May 9, 2018

Testing Tour Stop #7: Pair Penetration Testing with Peter

Continuing on my testing tour, I had the pleasure of pairing with Peter Kofler. The first interesting thing was that Peter does not identify as a tester, but as a developer. He read about my tour and was intrigued by the approach. He had gone on coding tours himself before and had done a lot of remote pairing sessions, but normally longer ones focused on programming. We were both curious what we would learn from a joint testing session of only 90 minutes.

My original experiment was designed to pair only with testers of other teams or companies. So what about having a developer this time? Well, I decided to stick with what I preach: titles are just words, and roles are just words as well. We should not let them limit us but contribute wherever we can provide value. So from my point of view, nothing speaks against pairing with anyone interested in learning more about testing.

Our Topic: Security & Penetration Testing

When writing back and forth on Twitter discussing potential topics to pair on, we decided to go with security and penetration testing in the end. A huge area of expertise which is so important and which we both wanted to learn more about.
When exchanging ideas, Peter suggested targeting Dan Billing’s Ticket Magpie, an intentionally vulnerable application made for practicing penetration testing. A great choice! We went for it and simply ran it locally in a Docker container, which gave us a perfect playground we could freely explore without worrying about breaking anything.

A few days before our session, Peter came up with a quite typical question I had already received from several pairing partners: "Shall / must I prepare anything for our session?" I answered as always, along the following lines: "You don't have to prepare anything (though I won't hold you back); I'll prepare the basis and we'll find our way together during the session." As a result of this brainstorming discussion, Peter came up with the idea to go for the OWASP Top 10 or try a tutorial on security testing. As I had learned about Burp Suite in Santhosh's security testing tutorial at Agile Testing Days 2017 but had never used the tool myself, I suggested learning more about it and seeing how far we would get with it. All in all, we had some ideas to start with, which was good enough for me.

A Successful Learning Session

We started the session with a short personal introduction. We had only exchanged some messages on Twitter but had never seen each other in person, so we needed common ground to start collaborating from. Afterwards I explained the high-level structure of my pairing sessions and that I'd like to pair the strong-style way. Peter shared he was not the biggest fan of strong-style, but would be willing to give it a try. Only when I started my mob timer application did he realize I really meant to do strong-style pairing with frequent rotations; he shared that he normally uses Pomodoros instead, where you always take a break after a defined period of time. An interesting idea for one of my next sessions! Well, we started with a rotation of four minutes but then quickly gave up sharing remote control and stopped the timer, with me keeping control and both of us trusting our communication to balance our power dynamics. As both of us already had sufficient experience with pairing, this worked out really well for us.

We had several options to start attacking Ticket Magpie.
  1. Blackbox: Explore the application looking for vulnerabilities in order to penetrate the system.
  2. Whitebox: Check out the source code and look for vulnerabilities.
  3. Tool support: Use tools like Burp Suite to discover vulnerabilities.
We decided to start with the blackbox option and see how far we got. We said we could still move on to the other options later on.

I don't want to spoil the fun of detecting the exact Ticket Magpie vulnerabilities on your own or show an easy way to do it. I can only tell you it was indeed a lot of fun! We started out with the mission to get user access to the system, ideally as an admin. And we did! :) We considered ways to get more information from the database and decided we would use tooling for that, so we postponed it for later.

We then focused on getting access to the actual passwords, and found it was indeed feasible. Instead of mere guessing, changing parameters and sending requests one by one, we now really wanted to benefit from tooling. We experimented with Selenium IDE to record a request and quickly repeat it (but we couldn't find a way to insert values from a file input), curl (but we found we were missing the required parameters to provide), and Postman (we thought about the tool's pre-request scripting functionality but didn't try it). In the end we simply ran out of time due to lack of tooling knowledge.
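
Looking back, a few lines of Python would probably have covered the "change a parameter, resend the request, check the response" loop we were missing. A generic, hedged sketch, not specific to Ticket Magpie - the endpoint, field names and wordlist are made up, and this should of course only ever be pointed at a practice application you run yourself:

    import requests

    BASE_URL = "http://localhost:8080"  # locally running practice application
    candidates = ["password", "123456", "letmein", "admin"]  # hypothetical wordlist

    for guess in candidates:
        response = requests.post(
            f"{BASE_URL}/login",  # hypothetical endpoint and field names
            data={"username": "admin", "password": guess},
        )
        # Adjust the success check to whatever the application actually returns
        if response.ok and "Welcome" in response.text:
            print(f"Possible hit: {guess}")
            break
    else:
        print("No candidate from the list worked.")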

Throughout the session, we both did some research which often provided the next idea to try. Some of the sites we found useful were the following:

Retrospective? We Wouldn't Have Done It On Our Own

From time to time, we would stop each other from going too fast or in the wrong direction. I really appreciated Peter asking me at the end of our session whether I had felt dominated by him, as he had learned he sometimes tends to do that. Truth is, I sometimes have to actively hold myself back as well so as not to dominate the other person. In our session I did not have the impression that one of us dominated the other, and neither did Peter. Great. In general our collaboration was really smooth, although I was cautious about it in the beginning as we did not know each other. We didn't stay at any one point for too long or get stuck.

It was really cool to see the practice application Ticket Magpie. I really liked the progress we made; only toward the end did I have the feeling that we were going around in circles a bit. However, I have to agree with Peter's remark that it was also really valuable to see which tools do not help in certain situations.

We found that both of us had taken notes during the session, and both of us needed them to structure where we were and where to go next. We decided to share them in a Google document and use them as a starting point for our next session - as there will indeed be a next session. We have already agreed on a date for it. In that next session we'd like to get better at actively pausing together from time to time, taking notes together, and deciding together where to go from there. We agreed not to set a fixed goal upfront, just like this time; we both felt doing so would limit our exploration. As this was clearly communicated at the beginning, it was fine for both of us.

All good, but the main thing is: we both thought we knew nearly nothing about security. We both found we did indeed know some things already, more than we thought. We might have been able to do the same on our own; it might just have taken more time. But although we both wanted to learn more, we simply hadn't done it on our own. This might be the biggest benefit of pairing: together, we tackled the topic, learned about ways to penetrate an application and practiced it hands-on in a safe environment. What more could we want?

Thank you, Peter, for a great session. I'm already looking forward to our next one, diving deeper into security and penetration testing!