Monday, August 20, 2018

CAST 2018 - A Community Adventure

Two weeks ago, I had the great opportunity to be part of CAST 2018, which took place in Cocoa Beach, Florida. I had heard great things about last year's event, and indeed, this one turned out to be yet another wonderful conference!

The theme was "Bridging Between Communities", a theme I really loved. It is great to exchange experiences with other testers working in a similar context. It is also great to share knowledge with testers working in different ones. And it is great to learn from people who do not identify as testers at all! Maria Kedemo was program chair and put together a wonderful program. It was really hard to pick between sessions, and I would have loved to go to each and every one of them. Thanks a lot for the diversity provided.

The organizers in general were great and really supportive. Together with their volunteers, they did a great job! I felt very welcome, communication was easy, and facilitation and support during the conference were awesome. Especially as a speaker, this part is really important to me nowadays. Though there were some short-notice changes to the schedule and rooms, everything went smoothly and without major problems.

Getting to a conference as a speaker is a great thing for me in any case. It's not only that I get the chance to share my experience and give back to the community from which I have received so much myself, or that I get the opportunity to level up my presentation and training skills, but also that I learn so much at these conferences myself. I learn as an attendee, I can reach out to fellow speakers far more easily, and I get approached by other participants. As an introvert at heart, this helps me so much to connect with people and learn from them. Thank you for making it so much easier for me.

At CAST 2018, there were so many wonderful people with whom I enjoyed many insightful conversations.
  • Lisa Crispin. I consider myself really lucky that we had so many opportunities over the last year to meet and exchange thoughts. I enjoy all our conversations and am already really happy that we are going to meet again at three more conferences within the next half year!
  • Marianne Duijst. Thank you for listening, for providing constructive feedback, for facilitating the questions for my talk, for your most wonderful sketch notes, and last but not least for spending a full day with me after the conference doing awesome touristic stuff! I am so glad I got to know you.
  • Ashley Hunsberger. It was such a great coincidence that we had a first call just a few weeks before CAST! It was even more lovely meeting you in real life, and just picking up things from where we left them. Thanks for the evening spent talking about anything and everything.
  • Lena Wiberg and Tomas Rosenqvist. It was fabulous to meet you both in person - and that already on the plane to Orlando. Thank you for sharing your experience. You gave me so much food for thought!
  • Angie Jones. Finally I had the opportunity to join one of your tutorials! It was great. Even better was the opportunity to talk with you. Thanks for making it easy, as well as for your feedback! :-)
  • Amit Wertheimer. It was great meeting you again, I really enjoyed our deep conversations on so many topics! Already looking forward to the next chance.
  • Jan Eumann. There are still not too many people who have experienced mob programming, mob testing, or mob anything. I really enjoy sharing thoughts with those who have! Even better: we have now submitted a joint proposal for a mob session at another conference.
And there were so many more lovely people! Maria Kedemo. Louise Perold. Anne-Marie Charrett. Jenny Bramble. Richard Bradshaw. Alex de los Reyes. Bailey Hanna. And many more great people with whom I loved exchanging knowledge during tutorials, workshops, talks, lunch, or a bus ride.
If you've come this far in my post, you might think this conference was all about people. Well, I think conferences are indeed about the people, just like software development is. Still, those people also shared great content. I learned a bunch again and took several ideas back to work with me. Here are my highlights of each conference day I attended.
  • Tutorial day. The tutorial "Advanced Automation for Agile: UI, Web Services, and BDD" by Angie Jones. She taught everyone how to create a well-designed automation framework, shared her experience, demoed what she taught, and made sure everyone was on the same page. At times I perceived the pace as slow, yet "slow allows for thoughtful thinking", to quote Maaret Pyhäjärvi. It felt good to learn that I know more than I thought I did - I could easily follow along and still had the time, and the ability, to help others.
  • Conference day 1. This day is tricky, as I had two favorite sessions. The workshop "An Exploration of Observing - Creating system awareness in our quest for quality" by Louise Perold was great for practicing our observation skills hands-on and especially for debriefing what happened and what we perceived. And the talk "Risk Based Testing: Communicating Why You Can't Test EVERYTHING" by Jenny Bramble was so entertaining that I even forgot I was up next! It also made obvious how important communication, talking about risk to guide our testing, and team morale really are.
  • Conference day 2. Finally I got to hear Marianne Duijst's talk "Wearing Hermione’s Hat: Narratology for Testers". I learned a lot about biases, perspectives, trust when it comes to information, as well as enabling others. It was simply awesome. If you get a chance to hear it, take it. It should have been a keynote for everybody to hear.
Well, I also had two sessions myself. I gave my brand-new talk "Cross-team Pair Testing: Lessons of a Testing Traveler" for the first time at a conference, speaking about my testing tour. Although my timing did not work out as well as planned, it seems people did not notice and still found the talk coherent. After the talk, there was a facilitated discussion which I had really feared in the beginning - but it went well, and people asked many great questions so we could dive a bit deeper into the topic. And the best thing: Alex de los Reyes came up to me after my talk and told me that he really related to it, shared my fears, and had already wanted to pair with other testers for some time himself. Now my talk was finally the trigger for him to actually do so! Amit Wertheimer instantly joined in, and the two agreed to have a pair testing session. How awesome is that?! It seems more people came out of my talk inspired. That was my goal, and I am really happy about this kind of feedback. What more could one want?
In addition to my talk, I gave my workshop "Mobservations - The Power of Observation in a Mob". I had less time to prepare for it than expected, as schedule slots had been swapped, so I felt less energized and the session was less organized than it could have been. Still, it went okay, people got value out of it, and I also received constructive feedback on how to improve it further. As Louise Perold shared with me, workshops are never perfect; they can always be improved. They also depend on the people you have and on your skills to adapt to them and the context.
In conclusion: the people were great. The conference was great. And the social activities around it were great as well! For example, the sponsors enabled most of the attendees to visit Kennedy Space Center, where we all had dinner together. What a location for that!
As a bonus: there were two rocket launches around the conference dates - who can say they offer that?!
Last but not least: I have the most awesome team ever.

Saturday, July 21, 2018

Testing Tour Stop #16: Pair Exploring an API with Thomas

Yet another stop on my testing tour? Yet another stop on my testing tour! Yesterday I learned together with Thomas Rinke. We briefly met at Agile Testing Days 2017, where we had a great conversation over lunch. Like many of the other people I paired with, Thomas is really engaged in the community as well, something I really appreciate. He is on the conference board of the German Testing Day and also actively helps his colleagues grow. He is already doing what I have long wanted to do with my own company's testing community: watching great conference talks and discussing them together. Still on my "to try" list!

The Preparation Phase

About a week before our session, we started brainstorming and aligning on our topic. Thomas was up for exploration as well as automation. I proposed an exploratory testing session, pairing the strong-style way. Thomas shared that he had never experienced it himself but was intrigued to give it a try. He had come across Alan Richardson announcing Thingifier, his new demo app for API testing. Awesome! I had wanted to have a look at that myself anyway, so we chose it as our system under test. We went directly for version 1.1, however, as the previous version did not start up successfully on our computers.

In case you haven't explored an API yet, I recommend Maaret's fantastic article Exploratory Testing an API. For our session we agreed to use Postman as our tool of choice and to run the app on my computer, sharing screen control. Have I mentioned already how much I love Zoom for not only offering great video quality and recording functionality, but also free control sharing that's easy to use and actually works?

The Session Introduction

We started out by setting everything up and getting any open questions out of the way. Interestingly, Thomas shared with me that he was nervous and excited, thinking "well, what could go wrong, it's going to be great". Especially testing with "strangers" was something out of the ordinary, and we didn't know each other that well yet. The thing was: I was nervous, too. I always am before my pair testing sessions. Fear is even one of the reasons I started the whole testing tour. Doing this is still scary for me. However, I can already see that it is continuously getting better over time. "If it's scary, do it more often" really applies for me here.

Thomas also shared that he had prepared for the session upfront. For example, he read Llewellyn Falco's blog post about strong-style pairing and listened to Woody Zuill's podcast episode about mob programming on Agile Uprising. Both highly recommended!

I learned that Thomas was running Windows as his operating system, so I advised him to rather not use keyboard shortcuts, as they are mapped differently on macOS and this doesn't translate well when sharing control - something I had learned in my session with Pranav. Thanks to the preparation done beforehand, everything could be quickly clarified, so we could activate the mob timer and start testing together.

The Testing Part

Thingifier is described as a "hard coded model of a small TODO application". Our first goal was to get an overview of the application's capabilities and understand its model. We started out using the given documentation and sent first requests to see how the application worked and behaved. Interestingly, later on I saw that the app's release note for version 1.1 said "When the app is running, visit http://localhost:4567/documentation in a browser to see the docs." This contradicted the actual documentation, which could be found directly at "http://localhost:4567/".
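If you'd like to follow along outside of Postman, here is a minimal sketch of the kind of recon requests we started with, using Python's requests library. It assumes Thingifier v1.1 running locally on its default port; the endpoint names are the ones from the documentation we read.

    # First recon requests against a locally running Thingifier v1.1.
    import requests

    BASE = "http://localhost:4567"

    # The docs live on the root page (not /documentation, as the release note claimed).
    print(requests.get(BASE + "/").status_code)

    # List all todos to get a feeling for the model.
    response = requests.get(BASE + "/todo")
    print(response.status_code)  # 200
    print(response.json())       # e.g. {"todos": [...]}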

Thomas started out as navigator, but when it was his turn to take over keyboard control as driver, we found that Postman and Zoom's screen control sharing did not work well together. He could perform right-click actions with his mouse, but not left-click. We discovered that it worked in Chrome, however, and when going back to Postman it then worked there as well. That would be an interesting topic to explore in itself!

While getting to know Thingifier's REST API, we came across the following findings and starting points for conversations.
  • The documentation spoke of todos and tasks. When first scanning the documentation, it looked like these terms were used for the same thing, so it seemed like an inconsistency. However, on a second read it became clear that there are standalone todos on the one hand, and projects that have todos assigned as so-called "tasks" on the other. Nonetheless, this naming differentiation caused us to stumble. Later I found that when using a category guid as the resource instead of a todo guid, it was successfully accepted as a task, but calling the task collection afterwards returned a 400 (Bad Request).
  • When adding a task to a project, the request returned a 201 (Created). Repeating the same request did not add the task a second time, which was good, but the request still returned 201 (Created). We considered that not critical but also not completely clean; we would rather have expected a message that the relation already existed.
  • We stumbled a bit over the naming of resources. For collections the singular form was used (e.g. "/todo"), for sub-collections of an element the plural form (e.g. "/todo/:guid/categories"). We wondered whether there was a related naming convention for REST APIs. Intuitively, we would have used the plural form for all kinds of collections, as the request returned an array of all items. For example, "/todo" returned an array of "todos", so we would rather have named it "/todos". Further research showed that it indeed seems to be best practice to use the plural form for collections.
  • For testing purposes we would have loved to see existing relations in the request responses, e.g. when querying a project to see its assigned tasks and categories.
  • Creating a todo created it with a random guid, and it was marked as not completed by default; just as expected.
  • When checking all given relationships between entities, we found them to be inconsistent. Categories and todos were implemented in both directions, so you could see all todos of a category as well as all categories of a todo. However, for example, you could see all projects of a category, but not all categories of a project.
  • We had already tried three of the four offered relationships, and thought everything seemed okay. To complete our understanding we tried the fourth as well - and it failed. "/todo/:guid/categories" resulted in a 400 (Bad Request). Fun fact number one: It's so often the last thing we try that unveils an issue. Fun fact number two: I could not reproduce this issue anymore when writing this blog post; it seems we missed something here.
  • When querying a collection, we would have liked to directly see in the response how many elements were returned, at least to help testing.
  • When requesting a resource that did not exist, we received an error message as expected. Its wording was quite technical but expressive.
  • We deleted all todo items and then queried the collection of all todos. The request returned only an empty response without a value (i.e. {}). We would have preferred to receive an empty array of the queried entity instead, to keep the response expressive (i.e. {"todos": []}); see the sketch right after this list.
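To make that last finding concrete, here is a hedged sketch of the check, again assuming the documented /todo endpoints and that each returned todo carries its guid:

    # Delete every todo, then query the collection again.
    import requests

    BASE = "http://localhost:4567"

    for todo in requests.get(BASE + "/todo").json().get("todos", []):
        requests.delete(BASE + "/todo/" + todo["guid"])

    print(requests.get(BASE + "/todo").json())
    # Actual response:    {}             - the entity name is gone
    # Preferred response: {"todos": []}  - an empty array stays expressive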
After we had gotten more familiar with the application, we decided it was time to go deeper into a certain part of the API. We agreed to have a more detailed look at creating todos, as we perceived it to be a core part while still being small and graspable.
  • Sending a request with an empty body returned a 400 (Bad Request), as expected.
  • When providing just the framing brackets {} as body, we received the error message that the field "title" was mandatory.
  • When providing a body with the title field set to an empty string, we received the error message that the title must not be empty. Providing a blank space resulted in the same message.
  • Providing a dot (.) as title worked, just like the Polish special character ł. I had previously learned that such additional Polish characters are not included in the basic Latin character set, so they can cause problems, e.g. when storing values in a database.
  • Providing a double quote (") returned a MalformedJsonException. We tried to escape the character using a backslash (\), but the request returned "Expected ',' instead of '"'". We wondered how to properly escape a character in a REST API request body but left that for future investigation (the sketch after this list shows what a correctly escaped body would look like).
  • We provided a quite long title of more than 256 characters. The request returned a 201 (Created) as expected. However, when then querying for all todos, the request only returned "Expected ',' instead of '"'". We found that this was not related to the long value provided, but to our previous attempts at escaping characters. We indeed had to restart the app to reset the given test data.
  • We switched to the guid field. We found that if a guid was explicitly specified on todo creation, the given one was used. In case the guid already existed, the todo was not created again; instead, the existing todo was updated. We discovered that there seemed to be no validation regarding the length of the guid. Later I found you could even provide an empty string as guid, which made it impossible to request that specific todo.
  • According to the documentation, both POST and PUT methods were offered. However, their purposes felt mixed up: both would create a todo in case it did not exist yet, and both would update an already existing one. Then again, I found that my gut feeling was mixed up here as well - I was thinking in terms of HTTP method semantics, whereas REST only demands a uniform interface.
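As promised above, here is a sketch of our title-field checks on todo creation, with the reactions paraphrased from what we observed. The last line shows what a correctly escaped JSON body would look like; whether Thingifier accepts it is exactly what we left open.

    # Probing the title field on todo creation.
    import requests

    BASE = "http://localhost:4567"

    def create(body):
        response = requests.post(BASE + "/todo", data=body,
                                 headers={"Content-Type": "application/json"})
        return response.status_code, response.text

    print(create(""))                # 400 - empty body rejected
    print(create("{}"))              # 400 - field "title" is mandatory
    print(create('{"title": ""}'))   # 400 - title must not be empty
    print(create('{"title": "."}'))  # 201 - a single dot is accepted
    print(create('{"title": "say \\"hi\\""}'))  # valid JSON with escaped quotes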

The Retrospective

As with all my test sessions, we left some time at the end for a short retrospective to reflect on how it went and what could be improved. Thomas started by sharing his thoughts, and in doing so provided wonderful feedback! Here's what I understood.
  • It was a lot of fun and overall a great thing.
  • He had seen on Twitter that I invited basically the whole world to pair test with me and thought that was a really cool thing. He told himself he could not lose anything there and wanted to join in.
  • The communication beforehand was great. He really liked that I reached out well in advance, about one week before our session. That we clarified the technical setup, aligned on the topic to pair on, and agreed on the style of pairing. That both of us could contribute when choosing our topic and how we wanted to collaborate.
  • He learned something during the session, which was great. He also had the feeling that it was worth it for me, too - not that I thought "oh no, Postman, exploring an API, how boring".
  • He found that I was open, friendly, and welcoming. He really liked that I was looking forward to our session myself.
  • The session offered lots of firsts for him: his first time using Postman, his first time working with a REST API, his first time trying the strong-style way of pairing, his first time exploring an API. He considered these quite a lot of firsts and therefore felt triggered to prepare for the session, which he thought was good. He felt a certain sense of commitment and obligation to do so. He had wanted to explore an API for so long already, and our session triggered him to really do it now.
  • Thomas said the execution of the session and our alignment were good as well. Strong-style pairing was really great, too, he had a good feeling about it. He shared that he felt the trust given and had trust in me himself. He referred to Llewellyn's blog post which emphasized the importance of trust as well.
  • What was not completely perfect from his point of view was that we worked on different operating systems. Thomas felt he was slowed down as he always had to use the GUI for operations like copy and paste. This was not optimal as he felt that I was so much faster when I was driving. He said that the upfront warning about the mismatched shortcuts was great, and in the end this was okay for him, but he still felt I was always faster.
  • Thomas valued the results of our session. We found some behavior working as expected, some surprising things, and some classics.
First of all, I had to thank him for his great, structured, constructive feedback. It is so helpful to receive detailed feedback on the positive things as well, showing which things are going in the right direction besides those that could be improved. Really appreciated!

Besides that, I shared that for me nothing is too basic to pair on. I have learned that I always learn something, especially with different partners who bring different perspectives. Sometimes it's great to see a different approach, and sometimes it's also great to see that we share approaches or think in similar directions. I started out on my testing tour because I did not know where I stood regarding my own testing skills and knowledge, or whether what I was doing made sense at all.

We both shared the approach of first starting with the base frame and then diving into the details. As Thomas added, there's also the other approach of going fully destructive at once. However, he normally likes to see the happy path first - how the application was intended to behave. In his experience, it works better to first provide feedback around that to developers instead of instantly coming up with rare, strange cases that cause the implementation to fail.

The whole communication with Thomas, upfront and throughout the session, was so easy; I really enjoyed it. Furthermore, I really appreciated that Thomas created a safe environment for both of us from the very start of our session. He shared right at the beginning that he was nervous and excited, which made it really safe for me to share my thoughts and feelings with him and put everything on the table. Neither of us had a problem calling something out during the session, as this trust had been established beforehand already.

Thomas shared that he loves to experiment and try new things to see which of them work. He also wants to give mob testing a try, especially after having experienced strong-style pair testing now. With the strong-style approach you don't get a chance to lose focus; it's simply awesome to have two people or even a whole team fully concentrating on the task at hand.

The Inspiration

In the very end, Thomas gave me some feedback on my whole tour and what I am doing. He said it's a real inspiration for him to also get out of his comfort zone. For example, he had recently decided to invest the time to join the Tuesday Night Testing and really valued the event, having great conversations and exchanging experiences. Honestly, this is so great to hear, as it's exactly what I am trying to do: show people how to safely get out of their comfort zones and how valuable this can be for their personal development as well as for the rest of us learning from them.
Thomas also brought up the example of Kim Knup, who had just decided to tackle Alan Richardson's note taking experiment and made that decision a public commitment. This created the same sort of accountability. Thomas shared that he really wants to spend some of his time focusing on learning. We have both encountered people who do not take any time to learn, not even half an hour. Many seem not to believe they are entitled to use their time for learning, "away from their tasks"; a few might also not want to grow (which is a pity). Thomas said that sometimes we should do the things that we think are great fun, and sometimes we should do things that are on the edge of our area of responsibility. I very much agree.

I am already looking forward to meeting Thomas in real life again at the next Agile Testing Days at the end of this year. We are both going to join the Web Application Security tutorial by Dan Billing. Also, Thomas plans to join Toyer's and my workshop on Finding a Learning Partner in the Testing Community and hopes to find an accountability partner for himself there! So awesome. I really hope he will, alongside many more.

Wednesday, July 18, 2018

Testing Tour Stop #15: Pair Evolving a Test Strategy with Toyer

The testing tour continues. Today I had the honor of pair testing with Toyer Mamoojee. Since the end of 2016, when we agreed on our first pact to try ourselves as conference speakers, we have been having a call once every two weeks. We talk about all things testing, exchange experiences, trigger thoughts, provide feedback and support. Still, we had never tested a product hands-on together. You can imagine how happy I was when Toyer took the chance and scheduled a pair testing session with me on my testing tour!

What topic to pair on?

That's one of the common first questions that comes up after scheduling a session. In this case, Toyer said he would love to do something from scratch: gather test ideas together and align our thinking. We know each other and our viewpoints quite well, but we had not yet practiced testing together. He wanted to see how we really go about testing certain things, ask questions, and see each other's thought processes when actually testing.

Based on his input, I thought: let's tackle an application we both don't know, explore it, and come up with a very first test strategy based on the gathered knowledge. As I knew that Toyer had discovered mind maps for his work some time ago and learned to love them for many purposes, I thought this could be our way to document our very first strategy draft - keeping it lightweight, easily editable, and visual.

Having a list of potential applications at hand, I reached out to Toyer and asked whether he wanted to learn more details about my idea before our session, or rather not. He answered: "I'm tempted to say share more information.. but I would like to be surprised too, as I want to see how I can tackle something without preparing". So he chose the latter option, to which I can really relate. Over the last years I have continuously been learning to tackle things without over-thinking; and I'm not done learning this yet.

Evolving a Strategy While Exploring

At the beginning of our session I presented my topic idea - coming up with a test strategy for a new product - and Toyer agreed to go for it. Lucky me, otherwise we both would have had to cope with an unprepared situation! ;-) I then offered different options for our system under test, of which Toyer chose Blender, an open source 3D creation tool. I had had some rare encounters with this application back at my first company, when we developed an AI middleware for game developers, but had hardly touched it since. Toyer thought it looked really promising, as we normally don't get to test these kinds of applications.

Toyer shared that, first of all, he would ask which kind of need this application is meant to fulfill and do related research upfront. Given the limited time box of our session, however, we decided to skip this and explore right away. Toyer accepted my suggestion to draft our strategy in a mind map, so we created one and continuously grew it according to our findings. He also agreed to do strong-style pairing while exploring, so I started up my favorite mob timer, set the rotation to four minutes, and off we went. It quickly became clear that we knew each other well. Collaboration was easy and communication fluent. We could fully focus on exploring Blender from a high-level point of view, trying to grasp its purpose and main capabilities, identifying limitations and potential risks. We were actually doing a recon session, just as Elisabeth Hendrickson describes in her awesome book Explore It! Reduce Risk and Increase Confidence with Exploratory Testing.

Throughout our session we gathered lots and lots of findings and discoveries, adding more and more important points to our test strategy.
  • Learnability. The application is not intuitive at all. It's a real expert tool. Still, everybody is a first-time user once, and even if you know the domain, the user experience this product offers is not that great.
  • Functional scope. The more we explored, the more functionality we discovered. The whole tool seems really powerful, but again, is not easy to understand.
  • Screen resolution. The GUI is cluttered with many UI elements, sidebars, popups and more. On our laptop screen that was already a challenge, and it will still be one on larger screens.
  • Usability.
    • Menus, popups and tooltips looked very similar which made it hard to distinguish the purpose of each.
    • Feedback on actions was often missing or confusing.
    • Some sidebars displayed content related to views we had previously visited, not being updated with information for the current view. As a result, they sometimes obscured needed information.
  • Consistency.
    • Some actions worked in one area but not in the same way in another.
    • Some sidebars were named x but the header label said y.
    • Some delete actions asked for confirmation, others just instantly deleted the item.
  • Portability. We tested Blender on macOS. The product is also offered for Windows and Linux. At several points we found strange, unexpected behavior and assumed it might be due to porting issues in the macOS version. For some points, I could even confirm that assumption when writing this blog post and checking the Windows version of Blender.
  • Maintainability and reusability. The GUI offered many sidebars, popups and views that shared similar layouts and menus. We noted to investigate whether they were duplicated or re-used components.
  • Robustness. We encountered error messages on invalid input that was not caught or prevented.
  • Automatability and testability. The application offers a Python API. We found Python commands offered in tooltips, the API reference in the help menu, and an integrated Python console. The console behaved differently than we knew it from other terminals, but still, it was very interesting that you could automate operations, which would also increase the product's testability (see the sketch after this list).
  • Discoverability and documentation. The help menu offered an operator cheat sheet; clicking on it triggered a temporary message to look at the OperatorList.txt, which we could not find. Only later did I learn that we had not come across the text editor where you could open named text files. What a hidden support feature. Also, we found the linked release notes page to be empty. We didn't dive deeper into the manual, but all available documentation would have to be tested as well, especially for an expert tool like this.
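To give you an idea of that testability, here is a tiny sketch of the kind of commands Blender's integrated Python console accepts. The operators used are just illustrative; we only scratched the surface ourselves.

    # Driving Blender through its bpy API, e.g. from the integrated console.
    import bpy

    # Add a cube - the GUI tooltips show the matching bpy.ops command.
    bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))

    # Query the scene state - this is what makes automated checks possible.
    for obj in bpy.data.objects:
        print(obj.name, obj.type)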
All in all, we made our way through many parts of the application. We made quite some assumptions. And we found that we still hadn't seen a lot of it yet. In the end, we didn't have a final test strategy, but a good starting point to iterate over.

Time to Reflect

We covered a lot in the limited time. We gathered lots of insights, ideas, and assumptions to verify. We tested a product in a domain neither of us knows much about - a desktop application instead of our usual web applications. We tried to gather information and keep a holistic view on things, not diving deep yet, focusing on uncovering the different aspects to test for this kind of tool - all the while mapping out our world and the points to tackle in our test strategy. As we learned more, our strategy evolved. We didn't reach an end by far. If this were our product to test, we would keep iterating over it while learning more.

The unknown domain had its own charm. We approached the product as a black box, not looking under the hood at first. We brought lots of testing knowledge but quickly saw that we lacked the domain knowledge. Toyer made an important point here: when hiring a tester for this kind of product, it would be best to look for someone who has already been exposed to such tools or related areas of expertise. We could still provide lots of value and ask questions that might otherwise go unasked, but we would quickly pair up with a product person or business analyst to model the product from a domain point of view - and also sit with developers to model the product's architecture.

Pairing up helped a lot once again: to see different things at the same time by looking at different parts of the screen, to grow the mind map way faster than either of us would have on our own, and to include different thoughts and viewpoints.

Enrich Your Experience

This was the fifteenth stop on my testing tour so far. In the beginning I had planned only ten sessions, one per month from the beginning of this year until the end of October. Although I have already exceeded my initial goal, each further session has enriched my personal experience and brought me in contact with different approaches to learn from - all while practicing my skills hands-on. Right now, I am reflecting on my whole journey so far, as I am crafting a talk about my experiences on this tour which I have the honor to give at CAST and SwanseaCon this year. And just while doing so, another tour stop was scheduled, further people indicated interest in pairing up or listening to my lessons learned, and I'm having further test sessions with awesome people. I'm curious where else this will lead me. What a wonderful time.

Tuesday, July 3, 2018

Testing Tour Stop #14: Pair Evaluating a Visual Regression Testing Framework with Mirjana

Yesterday I enjoyed yet another stop on my testing tour. This time I was joined by Mirjana Andovska. At the beginning of our session she shared that she was the lone tester at her company, just as I had been for a long time. She saw my tweets about my testing tour and knew she had to try that as well, so she signed up. How awesome is that?!

When asked what she would like to pair on, Mirjana answered: automation. I found out that she has a lot more experience in automation than I have, so I was really glad that she wanted to pair with me. She was especially interested in visual regression testing, mutation testing, and automated security testing - all great areas of expertise I would love to learn more about! In the end we decided to go for visual regression testing. Mirjana was even so nice as to prepare playground projects, so we could start our session right away and get to know the available frameworks better.

Learning While Walking Through

As Mirjana had kindly taken over the preparation, our first step was to walk through the available projects and see what was already there and what she had learned so far. She had revived an older playground project of hers using the Galen Framework, as well as created a new project to try out Gemini as a visual regression testing tool. Unfortunately, it wasn't as easy as expected to get Gemini running on her computer, as it has a dependency on node-gyp, which has a known issue on her operating system, Windows 10, requiring permission to install further software.

We decided to go with the Galen playground project first and learn more about this framework before maybe trying to get Gemini running on my laptop running macOS. But first of all: why Galen and Gemini? Mirjana referred to a quite recent overview of the different frameworks available for visual regression testing. Based on her team's needs and what she had read about the tools, she found that Galen and Gemini looked the most interesting to check out first.

The playground project Mirjana provided was based on the getting started documentation and the first project tutorial for the Galen Framework. She had already implemented a first test for a sample website. We only had to run Selenium locally as a standalone server, execute the tests, and see how it all worked. Just by Mirjana walking me through the playground project, we both learned more about how the framework works, extending our knowledge along the way.

Which Framework to Choose?

The main purpose of our session was to learn more about the Galen Framework and evaluate it. We wanted to discover pros and cons, as well as find answers to the following questions.
  • How easy is it to set it up and configure?
  • How easy is it to write a test?
  • How easy is it to maintain the tests?
  • How easily can you understand the resulting reports? Can you quickly see what is wrong so you can fix it based on this information?
With every step we found out more. Here's what we learned.
  • The spec files included CSS locators for objects as well as the test specification. We noted for later to find out whether we could also have the locators separate from the test specification.
  • Galen takes screenshots of the whole page as well as of the single elements to be located. Using images of only part of the page for comparison was pretty nice. However, when looking at the different menu items of a sample navigation bar, we found that the images were cut at different places, sometimes even cutting the menu item text. We felt this was quite strange, so we added it to our list of things to be investigated later.
  • The test report format can be configured. We tried the HTML report, which included a heat map visualizing what the tests covered - pretty nice. However, the report captured the console output only from the framework, not from the application itself, which would have made it easier to see the information needed to reproduce a bug.
  • The test run didn't close the browser in the end, so we noted that we would need to take care of this ourselves.
  • We wondered how to add more specs in one test runner file. We postponed this question for later investigation.
  • We learned that we can specify tests in JavaScript as well as in other languages like Java.
  • We saw options to test for desktop or mobile. We decided to not dive deeper here for now.
  • We found that it's really easy to run the tests not only locally but also on different servers, or on services like BrowserStack or similar.
  • We ran the tests on BrowserStack, and the tests failed due to differences. At first we assumed that the differences were due to the different operating systems Windows 7 and 10 as the Chrome version was the same. However, when looking at the compared images, we saw that the expected version showed a scrollbar where the actual image did not.
  • This led to the question of how the expected comparison image was created. Maybe on the first run? Maybe when running the "dump" command we had found? Or did it take the expected image from the last test run?
  • We had a deeper look at the command to create a page dump.
    • The Galen documentation told us that "With a page dump you can store information about all your test objects on the page together with image samples."
    • We ran the command and waited. As time passed, we began to wonder whether it was still running, especially as it didn't provide any log output anymore. We decided to let it run a bit longer and found that it took about three minutes.
    • We learned that the dump command, unlike the test command, did indeed close the Chrome browser after running through. The dump process generated lots of files: HTML, JSON, PNG, JS, and CSS files and more.
    • We discovered that the dump report gave us spec suggestions, but only if we selected two or more defined areas, like the viewport and a navigation bar element. It seemed the provided suggestions always refer to how one area relates to another.
    • Mirjana thought this would come in handy when a developer forgot to align all navigation elements. I added that we could also use this feature to explore the page. As these tests are quite slow by nature, we might only automate a part of them as a smoke test and explore around that manually.
    • If you'd like to have a look yourself, there's an official example page of a dump report, ready to be played with.
  • Checking out the official documentation, we discovered the "check" command for running a single-page test (a sketch of both invocations follows this list).
    • Here we learned that there are also options to create a TestNG, JSON or JUnit report.
    • The HTML report resulting from the check command showed a different structure than the one from executing "test", which we found interesting.
  • We still had not seen how the reference images were really created and wanted to test our assumptions. Sometimes you need to recheck the basics also at a later stage.
    • Documentation told us that the dump creates images that should be used for comparison.
    • When experimenting, we found that for "check" the stored reference images were actually used for comparison. However, when running the spec files via "test", it seemed to take the image of the last run as reference. Might this be a bug? It's always interesting to check out the reported issues when evaluating a framework.
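For reference, here is a hedged sketch of the two invocations we compared, wrapped in Python for this post. The spec file, report paths, and URL are placeholders from our playground setup; the flags are the documented Galen ones.

    # The two Galen commands we compared, run from a small Python script.
    import subprocess

    # Run a whole test suite (a .test file referencing the spec files).
    subprocess.run(["galen", "test", "tests/homepage.test",
                    "--htmlreport", "reports/test"])

    # Check a single page against one spec file directly.
    subprocess.run(["galen", "check", "specs/homepage.gspec",
                    "--url", "http://samplesite.example",  # placeholder URL
                    "--size", "1024x768",
                    "--htmlreport", "reports/check"])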
At the end of our session, I asked Mirjana about the pros and cons she sees regarding the Galen Framework from what we learned so far.

Liked
  • Easy to change configuration and run parameters, providing the flexibility to run the tests anywhere and in different combinations
  • Specifications for objects
  • The suggested specs feature; she shared that I opened her eyes to how to use it for exploring as well
  • Java and other options to write tests; depending on qualifications or the type of project, it's good to have the freedom to choose
  • Easy to run on Jenkins
  • Easy framework setup
  • Really nice documentation

So-so
  • Somewhat confusing tests, maybe related to not being too used to JavaScript
  • JavaScript files as an additional layer of configuration besides the Galen config, the run command, and the spec
  • The structure of the tests; we kind of tackled it but need to read more, as we would also need to maintain them

Didn't like
  • You have to be careful with indentation when writing tests, as you get an exception if it differs from what's expected - especially if you use another editor for fast editing
  • Different reporting depending on the command used

All in all, Mirjana shared that from her perspective much depends on the purpose of these tests in the first place. Each test, manual as well as automated, should give us an answer to a question. Sometimes it's not a simple yes or no. Developers might not be used to that, but reports can give us much more information than true or false, like a percentage of how much the result differs from the expectation.

As shared earlier, Mirjana had also set up a Gemini playground project. However, the test project failed to run due to missing dependencies, which would have to be installed - not easy if you lack the required permissions. Also, it was hard to search for and find this kind of information. She had allocated the same timebox to exploring each framework, but she didn't get as far with Gemini as she did with Galen.

In her experience, you usually only have a limited timebox, like one day, to try things out. You don't have infinite time or budget or resources. This is usually an important parameter when evaluating frameworks. However, it also depends on the skills of the person who tries them; if you don't have the most knowledgeable person do it, it's a trade-off. I think this is a puzzle you cannot fully solve, as you only discover what kinds of skills, knowledge, and experience you really need once you're already at it, not before. Mirjana gave the valuable advice that when you want to evaluate a tool, you need a familiar environment as the system under test. Ideally it's a real yet very simple project, so you can write simple tests; otherwise you will lose time inspecting elements and learning about the application.

The Value of Learning Together

In the middle of the session Mirjana asked me: "Did you check the time? Because I'm having fun! :D" In the end, we spent 150 minutes testing together instead of the originally allocated 90 minutes.

I found the session really interesting and valuable. Mirjana had prepared it really well, which I appreciate a lot! So far I had only watched demos of visual regression testing but had never seen it running against an actual product. By pairing up, I now had this opportunity, and I'm thankful for it. In my experience, I get much farther in less time this way than when I'm on my own.

Mirjana shared that in between her daily work she looks for ways to make her life easier. She tries to experiment, to see what is new and what more we can do. Visual regression testing was one of the topics on her todo list. She had started the Galen playground project some time ago and realized, by coming back to it now, how much she had grown and learned since then. She hadn't seen the actual use of such a framework yet either, and setting up a JavaScript project is not her everyday work. By doing it together, I gained more insights about visual regression testing from her and could give more ideas back to her. Now we both have a starting point. And that's awesome.

Monday, June 18, 2018

Testing Tour Stop #13: Pair Penetration Testing with Peter

Today I stopped by Peter Kofler again on my testing tour, picking up our penetration testing endeavors from our first pairing session. We agreed to stay with Dan Billing’s Ticket Magpie as our target system. Last time we had left off with multiple options for how to continue: use tools for automation, try another attack like cross-site scripting, or explore the source code for vulnerabilities. Peter had come across sqlmap, an "automatic SQL injection and database takeover tool". It looked promising, so we decided to give it a try and focus our pairing session on it.

We started out by getting sqlmap to work, installing the required Python version and cloning the GitHub repository. This went nice and easy, without any issues.

The next step was getting to know the tool and figuring out the many options it provides. We had already learned last time that the shop's login form was vulnerable to SQL injection, so we used it as our target. From this starting point, we learned step by step what we needed to provide and what not, what the individual parameters do, and how to see what exactly the tool is trying out.
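To give an impression of what such a run looked like, here is a hedged sketch, wrapped in Python. The target URL and form parameters are placeholders for our local Ticket Magpie setup; the flags are standard sqlmap options.

    # Pointing sqlmap at the shop's login form.
    import subprocess

    subprocess.run([
        "python", "sqlmap.py",
        "-u", "http://localhost:8080/login",    # placeholder local target URL
        "--data", "username=foo&password=bar",  # placeholder POST parameters
        "--level", "5",  # try more (and more obscure) payloads
        "--risk", "3",   # include riskier injection techniques
        "-v", "3",       # verbose: show the payloads being tried
    ])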

It started out easy - and then it stopped being easy. The nice learning curve we had in the beginning started to flatten out. Whatever we tried, the tool kept telling us that the login form did not seem to be prone to SQL injection. But why? As humans we could see that our target provided the information needed to identify existing user accounts. Somehow the tool did not recognize this. Or rather: we did not find out what the tool was missing in order to recognize it.

In the end, we closed our session with this result and mixed feelings. We were annoyed that we couldn't finish successfully; it was a real pity. Still, we enjoyed our session and learned a bunch again. It looked easy in the beginning, but then we stumbled and left the session with lots of question marks, already thinking about how to continue. The route of going deeper and investigating more might lead us to the solution, or nowhere at all. We won't know before we try.

What went really smoothly was our collaboration. We used the Pomodoro technique this time, breaking our 90-minute session down into short intervals of 25 minutes, skipping the breaks. This way we had several checkpoints to decide how to proceed, making sure we always stayed aligned. Also, we instantly applied the pairing style we had defined in our last session: I shared my screen, mostly we worked together on it, and sometimes we split our focus to research simultaneously. This time we used our shared Google document from the beginning to take notes we both could see.

Compared to our first session, I felt that collaboration was easier, as we had already worked together, gotten to know each other a bit, and had those basic working agreements to build on. Although this second session was not as successful as the first, we both shared the opinion that pairing itself is invaluable for generating ideas about what the problem could be and what to try next to solve it. Pairing challenges our own understanding, creating a shared one. Pairing is learning time we don't want to miss.

Friday, June 8, 2018

Testing Tour Stop #12: Pair Exploring App Development with João

Recently, I've been making a lot of stops on my testing tour. Here's the summary of another one, and there are more still to come! This time, I had the honor of pairing with João Proença. João was one of the people who got inspired by Toyer's and my pact as learning partners at Agile Testing Days 2017. A few months later, at European Testing Conference 2018, we had several long conversations and found in each other that perfect mixture of similarities and differences from which both sides can learn a lot. Toyer and I are really happy that he decided to join our extended pact group along with Dianë Xhymshiti! All in all, it's a pleasure to exchange knowledge with him, so I was really happy he decided to become part of my testing tour.

Preparation? Already done!

Normally it was me who prepared a system under test or even a topic to pair on, as most people didn't have a clear notion of what to tackle. This time, however, I did not have to do anything besides looking forward to the session. The reason: João had managed to enable us to have a look at one of his company's products, OutSystems' development environment. How great is that?! Here's part of the email he wrote in advance of our session.
What I'll be proposing we do tomorrow is a bit of paired-up exploratory testing over the "first experience" one has with the product my company, OutSystems, offers. The fact that you know little about it is great!
I'll give you a bit more detail tomorrow when we begin, but we will most likely be using the currently available features of the OutSystems platform, but also some new unreleased ones our teams have been working on!
A couple of teams here that work on product design and UI development are very interested on the outcome of our session and have asked me if we can record it, so that they can watch it later and collect feedback.
Would you be ok with us recording our session for our internal use (a video of our interaction with the platform)?
Recording? I had even discussed that idea with Cassandra on our common stop, so yes, I really wanted to give it a try. And all this to help people improve their product? Of course, that's exactly what I love to do!

Getting Our Session Started

OutSystems claims to enable you to "Build Enterprise-Grade Apps Fast". On their website you can find the following definition of their product.
OutSystems is a low-code platform that lets you visually develop your entire application, easily integrate with existing systems, and add your own custom code when you need it.
Crafted by engineers with an obsessive attention to detail, every aspect of our platform is designed to help you build and deliver better apps faster.
At the beginning of our session, João explained that OutSystems' development environment, Service Studio, has major release cycles of a few months, just like other IDEs such as Microsoft's Visual Studio or Apple's Xcode. The goal was to tackle the software from a newbie's perspective. The product design people were really curious about the experience a new user would have, so João asked me to really speak my mind during testing. Sure thing!

We agreed to split the testing session into the following two steps.
  1. We do the "Build a Web App in 5 min" tutorial for the current version 10.
  2. We create a new app in the upcoming service studio version 11 which is currently under test and not yet released.
João does not work on the team developing the UI, so the whole topic was new for him as well. Still, of course, he's an advanced user already and knows his way around the software. Therefore, we decided that I would be the navigator for step one; for step two we could switch roles, as he had not seen anything of it yet either.

Then it was about time. We started recording, and we started testing.

The Joy of Exploring a New Application

Now, I won't share the video recording, and I won't share any details about the second step, testing the beta version and trying to create a web app for a real use case. What I can share, however, are the highlights of the first step: doing the tutorial of the current version to get a first feeling for the IDE. Why? Because the great thing is, you can simply try it for yourself! Personal environments are completely free and available in the cloud for as long as you keep them running. For our session, João had prepared everything up to the point where we started the five-minute tutorial to develop a new web app.

One of the major topics we came across all the time during testing was usability.
  • Issues with the tutorial itself. Even recognizing the tutorial as a tutorial. The same navigation button in the tutorial sometimes referring to the help, sometimes to the next step. Not seeing helper arrows pointing to the area of interest. These arrows not being displayed when viewing the tutorial steps. Not making it obvious that you don't need Microsoft Excel. Confusing ordering of the explanations of what to do. Missing steps. Constant re-positioning of the tutorial dialog - at one point I even misinterpreted this behavior as the next step, which triggered me to execute it twice. Not having reached 100% when finishing the tutorial.
  • Inconsistent or misleading styling. Surprisingly different styling for the tutorial, the tutorial steps, and other application dialogs. A new-application button that looks inactive and does not draw attention. Triangles used as navigation arrows which rather look like buttons for running an app.
  • And more issues. Superfluous scrolling that did not reveal more options. A wizard step offering only one option - so why have it at all? Doing the same thing twice leading to two different results, on purpose. Viewing your new web application in the smartphone view first. Unexpected field validation in the resulting demo application.
I talked a lot about the feelings I had regarding the application. As a new user, it's very important for me that I don't get lost but find my way through the first steps easily enough, that I get aha moments of what's happening so I learn, and that I don't get annoyed or frustrated by the software. At several points, however, I did get those feelings. I first had to learn how the tutorial wanted to lead me, and only then could I successfully complete it.

I tried to really slip into the shoes of a new user. From a technical point of view, I understand how things are built and why certain decisions were made, but from a pure user point of view I would have asked many questions, so I raised them - especially as I understood the tutorial was trying to take users by the hand and make it as easy as possible for them to create a first app. Furthermore, new users will always compare the system at hand to applications they already know, using those as oracles for their expectations.

In our second step, we switched roles and learned about the new version, with several things revised. Again we found quite interesting things, which I cannot disclose here, but João took them with him to forward to the team.

All in all, this is of course only part of the story, told from a tester's perspective: trying to find those things that can still be improved. The product looked really nice and easy to use, and I would love to dive deeper into it!

In Retrospect

It was a really nice experience to have everything so well prepared by João - my thanks to him for that! It was awesome to explore yet another completely unknown application. It was fun to be an actual new user as a tester while imagining the perspective of a real new user who wants to start developing an app, trying to learn what they would learn. It was great that we collaborated well and that our results were really productive. It's absolutely awesome that we have a video recording of our session! This way we could focus on learning and providing feedback. The downside, of course, was that I then had to go through nearly two hours of video material to extract the goodies for this blog post. For real-life sessions I would not skip note taking as we did here, so I would still get that quick overview of issues, questions, or further areas to explore. Still, video recording can be invaluable there, too. It's proof of what actually happened! Awesome for reproducing and reporting bugs, or even investigating them.
Sometimes there are bits of information that are crucial but you only acknowledge that afterwards when you had to go back. ~ João
When watching the recording, I saw lots of things I had not realized during testing, even though João drove for me and I could focus on observing and navigating. Exploring a video recording provides lots of feedback as well - about the application, as well as about our own procedure when testing, our thoughts and ideas. Lots to learn from.

We set off exploring with certain goals in mind, and we reached our destinations. However, we did not structure too much how to get there. This is the sort of freedom I love when exploring, with every step we learn more about potential ways ahead of us and can decide which to take. We don't have to follow the one and only route strictly but can discover so much more on our way, even if it's not in our focus.

On a meta level, I noticed that with more practice it's indeed getting easier for me to pair with other people. It still helps if I have already met them in real life. The fear of looking silly is fading, so I can focus more on the actual task at hand and freely express my thoughts. My sessions were constructive from the beginning, but they used to come with a layer of self-doubt on top that I had to deal with.

There's another thing that only came to my mind when writing this blog post: it felt awesome exploring an IDE. Again. I only now realized that I have way more experience in that area than I might have admitted even to myself. As a user of IDEs like Eclipse or IntelliJ, of course. But even more importantly, as a tester. I spent my first years as a tester testing an AI middleware for computer game developers. We developed our own engine, IDE and server, and we also integrated with Unity. I did lots of exploring back then already, and I even wrote those kinds of beginner tutorials myself! :D And I loved this awesome domain.

Finally, the very best thing: João said he took a lot of value out of the session himself. He can now bring the issues we found to the teams. It was great to hear that they do lots of usability tests and really care about the feedback, using it to improve the product!

Saturday, June 2, 2018

Testing Tour Stop #11: Pair Exploring Test Ideas with Viki

The first time I met Viktorija Manevska was back at Agile Testing Days 2015, the very first conference I ever attended. Before the first conference day started, I joined the lean coffee session hosted by Lisa Crispin and Janet Gregory. Both Viki and I found ourselves at Lisa's table, sitting next to each other. This is where we started to exchange experiences. Since then, we have met each year at the same lean coffee table on the first conference day! :D Viki helped me a lot by sharing her story as a first-time conference speaker and providing feedback on my own first steps towards public speaking. We had several Skype calls over the last year which proved invaluable for knowledge exchange, just like all my sessions with Toyer Mamoojee. I really love how I have met up with more and more awesome testers over the last months! I signed up for a virtual coffee to get to know other testers like Milanka Matic, Amber Race or Kasia Morawska. I really enjoyed the calls I had with Mirjana Kolarov. And I loved finally meeting Anna Chicu for the first time! All those meetings were invaluable, full of knowledge sharing, storytelling and having a good time together.
Now, back to Viki. She just recently moved to Germany to work for a consultancy. Having worked in product companies before, her current project provided new opportunities as well as new challenges for her. We decided to focus our pair testing session on one of those, on a part of testing that might not come to mind first: generating and challenging test ideas as input for exploratory testing sessions.

A Session of a Different Kind

This time, there was no software under test - instead, we tested and challenged ideas. Test ideas that Viki had already brainstormed and provided as input for our testing. The result? We discovered more ideas. We restructured existing ones. We removed duplicates. We challenged the procedure and more. All with the goal of improving things and making them better, like we testers usually do with everything that gets into our hands.

While brainstorming and discussing, we talked a lot about testing itself. Have you used tours as a method to explore a product? I normally use cheat sheets heavily to generate further test ideas, identify risks, and come up with questions about things we haven't talked about yet. Cheat sheets like the famous test heuristics cheat sheet by Elisabeth Hendrickson, among other great resources. This way, we started talking about exploratory testing itself. How structured should it be? Sometimes I feel my own sessions would benefit from a bit more structure, but I'm still happy with starting out looking for risks in area A and ending up discovering a lot in section S. However, exploring is not unstructured and seldom goes without a clear mission or note taking. What I really love is Elisabeth Hendrickson's definition of exploratory testing, as stated in her most awesome book Explore It!:
Simultaneously designing and executing tests to learn about the system, using your insights from the last experiment to inform the next
In my opinion, too much structure, or being too strict about it, removes the part of adapting your way according to what you learned: having the product show you the way, or better multiple ways, and following them to explore areas of the unknown. At times I wonder whether Maaret Pyhäjärvi was thinking of that when saying that "the software as my external imagination speaks to me", a notion I have often pondered.

Now, the system to brainstorm test ideas for was a typical untypical web application: something known, like signing up for a user account, and something unknown due to the specific domain. We started with ideas around negative testing, UI testing, smoke testing, and workflow testing. Going forward, we produced a lot more ideas, asking many questions about the product, and also about testing itself.
  • Would you consider different browser settings like turning JavaScript off as negative testing? It could be a normal use case for certain users.
  • What about high security settings or incognito modes? Some users just care more about privacy than others, yet they don't mean any harm to your application.
  • What about popup blockers or ad blockers? They are quite frequently used.
  • How to best limit the input for a name field? This made me think of Gojko Adzic's recent book Humans vs Computers as well as his famous Chrome extension Bug Magnet, a really useful tool to quickly test out valid or invalid input values.
  • Or the Big List of Naughty Strings! I've known it for a long time but haven't used it much yet. There's definitely more to it; see the sketch after this list for how such inputs could be fed into an automated check.
  • This brought up the topic of SQL injection, a common vulnerability to check for. In general, user data must not be at risk, so let's see if we can get hold of login information, take over user sessions, or simply view other users' unsecured pages.
  • What about tampering with system settings like your local time: could you trick the application into providing you better offers based on dates?
  • You send emails? Then it's really important to test those properly. In one of my previous teams, formatted emails rendered very differently in different email clients. Are the parameters of the email template filled with the correct values depending on the given conditions? Are images attached where needed, or do they have to be downloaded? Are the links correct? Could the email be considered junk and filtered out? How long does it take to receive the email?
  • What happens if you bookmark the URL while in the middle of a process, to share it with someone else?
  • What about the browser back and also forward buttons, could they lead to unexpected, inconsistent, or simply unpleasant behavior?
  • What about consistency regarding styling? Is the UI accessible, e.g. for color-blind people?
  • Which kinds of browsers, browser versions, operating systems/devices, and screen resolutions do we really want to support?
  • Would you have manual smoke tests? Why not automate those sanity checks instead of repeating them over and over again?
  • What about alternative paths through the application, mixing up the order in which you can do things or skipping steps - would you still reach your goal as a user? Should you be able to do so or is there a prerequisite for critical things?
  • Have you ever tried the nightmare headline game to discover what would be the worst thing that could go wrong?
  • Oh and: although we have received so many emails about it lately - have you really considered GDPR compliance?
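Several of these ideas lend themselves to quick automation. Here is a minimal sketch in Python of how a handful of nasty name inputs could be thrown at a signup form; the endpoint URL, field names, and status code expectations are purely hypothetical assumptions for illustration, not details of the actual product we discussed. A check like this could also serve as one of the automated sanity checks mentioned above.

    # A minimal sketch, assuming a hypothetical signup endpoint.
    # The URL, field names, and status code expectations are illustrative
    # assumptions, not details of the real application.
    import pytest
    import requests

    SIGNUP_URL = "https://example.com/api/signup"  # hypothetical endpoint

    # A few classic troublemakers; Bug Magnet and the Big List of
    # Naughty Strings offer many more.
    NAUGHTY_NAMES = [
        "'; DROP TABLE users; --",    # SQL injection attempt
        "<script>alert(1)</script>",  # script injection attempt
        "Ünïcødé Näme",               # non-ASCII input
        "A" * 10000,                  # extreme length
        "",                           # empty input
    ]

    @pytest.mark.parametrize("name", NAUGHTY_NAMES)
    def test_signup_handles_naughty_names(name):
        # Post the naughty value into the name field of the signup form.
        response = requests.post(
            SIGNUP_URL,
            data={"name": name, "email": "test@example.com"},
        )
        # The application should either reject the input cleanly (a 4xx
        # response) or accept it safely; it must never crash with a 5xx.
        assert response.status_code < 500

Parameterizing keeps every input as its own test case, so a failure points straight at the offending string instead of hiding behind the first one that breaks.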

Looking Back

Both of us loved the session! Viki said she was really satisfied with the outcome. It was great for her to hear a different opinion, especially from another tester's fresh point of view coming from the outside. Sharing experiences, experiments and stories provided lots of value for both of us. I was really glad my input helped her further, especially regarding testing emails, SQL injection and GDPR. I personally really enjoyed having a completely different pair testing session: exploring mind maps, challenging ideas and thoughts, discussing structures, procedures, and strategies. Exchanging information about different positions, roles, expectations, and ways of collaborating - which is just as important when testing. Talking about exploratory testing: what we understand by it, what we do, how we give feedback and create transparency, how we provide value. Just as Viki said, quoted freely: "Name it like you want, but it's important to deliver value." I so much agree.