Sunday, April 22, 2018

Testing Tour Stop #5: Pair Exploring with Lisa

Speaking of tours: last week at the Mob Programming Conference I talked to Lennart Fridén about his journeyman coding tour as a software crafter and his experience with these kinds of tours. Just one day later, I had the chance to continue on my own testing tour. And by doing so, a personal dream came true: I pair tested with Lisa Crispin. As both of us had been invited to the conference, we found ourselves once again in the same place. What could be better than making use of that and holding our pair testing session in person!

Sightseeing & Testing - A Perfect Day

The day after the conference, Lisa, Barney and I decided to go sightseeing. We explored Boston's Freedom Trail and had lots of fun together. We even tasted some delicious seafood at a wonderful Italian restaurant. Sounds like testing? Indeed, but the actual testing session only started when we were back at the hotel. Both Lisa and I were pretty tired, but as we had been looking forward to this session for so long, we decided to start immediately on our return.

Settling down at a cozy kitchen counter in one of our hotel rooms, we decided to go for an application Lisa was involved in. She knew it needed testing and that she could forward our results to the respective developers. So it was training for us, but it could also have a real impact on improving the application.

For the following, imagine a web application where organizations can offer services for consumers. Four different roles were implemented.
  • Administrator, administering the system with full access
  • Service provider, offering a service
  • Consumer, consuming the service
  • An interested person, registering for news about the provided services
We decided to use Lisa's laptop, set the MobTime timer to four minutes to switch the driver / navigator roles, and set a mobile phone timer to 80 minutes so we would have 10 minutes left for a retrospective at the end of our session.

All set up, let's go!

At first we decided to explore a feature Lisa knew had been added recently: automatic cancellation of invoices for services that had not taken place. However, the very first steps already left us wondering how the whole invoicing was supposed to work. We figured we might have to set up our test data first, so we started creating new accounts for the service provider and the consumer. By doing so, we stumbled across so many issues! Here are some examples.
  • For offering a specific service, the provider could define a start date and an end date. When we clicked on the same dates in the date picker again, the date value was removed and the field was evaluated as required. Really unexpected!
  • When we defined a start date and then changed it, we got an alert that you cannot save without assigned consumers of the service. From our point of view the form should not save at all before we confirm our entries, and it should also allow us to save without consumers, whom we would not even know yet.
  • When we clicked on the date picker icons, nothing happened; we had to click inside the fields to get the calendar displayed. However, many similar date pickers in other applications had taught us to expect the calendar to open when clicking on the icon.
  • When we had the keyboard focus on the field before the date pickers, then added dates using the mouse, then pressed Tab to jump to the next field, the focus landed on the first date picker field instead of the first field after the date pickers, as we would have expected.
  • We noticed the form validated fields instantly while we were still typing. It did not wait until the field lost focus, which we found really strange behavior. It made me think (or rather cry out) "But I am still typing!". This validation was way too eager for my liking and annoyed me a lot.
  • We also raised several questions, like: "It seems you can only cancel a service which is scheduled further in the future, is this behaving as expected?", or "There is a view called 'Approvals pending' without content, but what should it show? Do scheduled services first have to be approved by anybody? Is this feature used at all?"
In the end we found that we could not even test the new feature. Invoices were not sent out automatically as they should have been, so we could not test their automatic cancellation either. We decided to switch our focus to security testing instead. And, oh my: what we found was both satisfying and terrifying. Again, some examples.
  • We decided to test for access permissions. A consumer must not see any information besides their own profile. Could they access invoices? No, cool. Hm, what about other views we had seen when logged in with other roles? And suddenly we found a treasure trove. A consumer could actually access lots of pages with sensitive, confidential data like names, emails, addresses, payments, global settings, and so on. Some pages were indeed forbidden, but many were still accessible.
  • As a consumer, we could successfully create ourselves a discount and then use it. Even a discount of 100%. How convenient, and how bad.
  • Fortunately, we did not find a way for a consumer to create their own new admin account.
  • Even for the pages that seemed secure, the actual HTTP request returned a 302 (Found) where we would have expected a 403 (Forbidden). A question to ask!
  • When we opened an existing URL we lacked permissions for, we got a related error message. However, for a non-existing address we got a 404 (Not Found). Here we reached the limit of our knowledge: is knowing which pages exist and which don't the sort of information hackers could make use of?
  • We found that auto-incrementing IDs had been used for building the URLs. We both knew that this is an anti-pattern, as it makes information easily discoverable for hackers. If there is consumer number 21, we know that there have been 20 of them before. We can try to find out how many exist. We can track growth over time. Better to use undiscoverable, unordered IDs like UUIDs instead.
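Status-code checks like the ones above lend themselves to a simple rule of thumb, which could even be automated. Here is a minimal sketch in Python (the page paths are hypothetical examples, not the application's real URLs): for a request made without the required permission, a 403 is the expected answer, a 200 means the page leaked, and a redirect deserves a follow-up question.

```python
def judge_unauthorized_response(status):
    """Classify the HTTP status returned for a request made
    WITHOUT the required permission."""
    if status == 403:
        return "ok"        # explicit Forbidden: exactly what we expect
    if status == 200:
        return "leak"      # page rendered for the wrong role
    if status in (301, 302, 303, 307, 308):
        return "question"  # redirect: where to? what does it hide?
    if status == 404:
        return "question"  # hides existence, but is that intentional?
    return "question"      # anything else: worth asking about

# hypothetical pages a consumer should not see, with observed statuses
observed = {"/admin/settings": 302, "/invoices/21": 200, "/providers": 403}
report = {path: judge_unauthorized_response(s) for path, s in observed.items()}
```

A check like this makes the session findings repeatable: every "question" entry in the report is a conversation to have with the developers, and every "leak" is a bug.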
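The difference between the two ID styles is easy to demonstrate. In this small sketch (the `/consumers/` path is an invented example), the sequential URLs expose both the count and the next guessable target, while random UUIDs from Python's standard `uuid` module reveal nothing about their neighbors:

```python
import uuid

# auto-incrementing IDs: /consumers/21 tells you 20 others exist,
# and /consumers/22 is a trivially guessable next target
sequential_ids = [f"/consumers/{n}" for n in range(20, 23)]

# random version-4 UUIDs: no ordering, no count, no guessable neighbor
random_ids = [f"/consumers/{uuid.uuid4()}" for _ in range(3)]
```

Knowing one UUID-based URL gives an attacker essentially no help in finding another, which is exactly the property the auto-incrementing scheme lacks.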
While testing, we also came across things like the following.
  • When registering as an interested person, you needed to provide your country. The country field was grayed out as if inactive - but it was still editable. Mixed messages here.
  • When loading the consumer homepage, a payment-related JavaScript file was loaded; but why? It should not be needed at this point.
  • We came across several typos. This way we could also see that sometimes two different translations were used for the same label (like in view and edit modes).
  • We found inconsistent, mismatched namings for web service endpoints and pages, like "/versions" for pending approvals.
  • We saw that the copyright year was still 2017 instead of 2018.
  • The version number of the application was displayed strangely: of "0.8.6.4" the two last digits were displayed lower than the rest. Also, the mouse pointer changed to a hand when hovering over the version number as if it was a link, but nothing happened when clicking on it.

In Retrospect

In the beginning, both of us were super tired. But as soon as we started testing together, the time flew by, we got into a flow state, and all tiredness was forgotten. Before we knew it, the phone alert went off, reminding us that our timebox was over and the retrospective was due.
All in all the session was really fruitful - and super fun! We both learned new test ideas, tricks, and insights, and even provided actual value with our results. Meanwhile, Lisa informed me that the forwarded issues had already been fixed - awesome!

From what we saw, we made some assumptions about the application's development. Our biggest assumption was that the involved developers worked solo on the application, not sharing much among themselves. What made us think that? We saw inconsistent styles throughout, like differently colored save and cancel buttons, different labels, features only implemented in some views but not in all, etc. We also saw features we did not understand and that did not fit the bigger picture; why would they be offered at all?

Although Lisa and I had never tested together before, our collaboration was super easy. Also, the mob timer supported us really well again. It gently reminded us to change roles without disturbing us. We sometimes caught ourselves mixing up our navigator / driver roles, but we both instantly noticed it and corrected it immediately. We found it's sometimes hard to resist and that we need further practice, but also that the roles are really valuable and our few mistakes did not disturb the session at all. Our assumption regarding our smooth collaboration: pair presenting helps pair testing. And probably also the other way around! Although we hadn't tested together before, we had already worked together when preparing our workshop for Agile Testing Days 2017. We had talked a lot before, we knew each other pretty well, and trust had already been established. This definitely helped a lot.

An observation: we did not take common notes during our session. Instead, Lisa made notes for herself to forward to the developers, and I made notes specifically needed for this blog post. This only worked because we waited for each other to finish taking notes before we continued testing. Still, we could have simply taken notes together, without any waiting time. (I should have learned that already from my exploration session with Maaret.)

Equipment is really important when pairing or mobbing. We only had the one laptop, with an English keyboard and a small Enter key (which caused me to stumble frequently). We also only had the one laptop screen, and if you come into each other's space, it can quickly get uncomfortable. Lisa shared that at her current company, Pivotal, they try to remove any barriers that could keep people from pairing. People get the keyboard and mouse they want and can carry them around. They also have two big screens for pairing: one for the task and one for the video call when pairing remotely. People have great noise-cancelling headsets. Everything is made easy for collaboration.

Lisa and I found we complemented each other. We built upon each other's ideas and never got stuck. Doing this alone, we would have wasted way more time thinking about what to do next. Furthermore, we kept each other on track and in scope (like Maaret and I did in our session as well). We called it out when one of us wanted to divert from our focus, making clear that this was not important right now. At the same time, we also went deeper ("Let's try one more!") where one of us alone would not have gone, and found many issues by doing so. We felt that pairing provided more freedom and less anxiety about what to try next. We were way faster like this. Also, structure was really important, especially having the retrospective at the end.

Getting Inspired?

All in all, our session was super fun - thank you, Lisa! Now I'm already excited about the next stops on my testing tour. It seems some people read Lisa's advertisement (thank you!), as I received further session requests. For example, Jean Ann Harrison already announced her interest in a mob testing session with Lisa and me. That would be awesome! Also, Alex Schladebeck and Cassandra H. Leung scheduled a session - really looking forward to those! In any case, I hope even more people will get inspired and schedule a pair testing session with me.
