Saturday, July 21, 2018

Testing Tour Stop #16: Pair Exploring an API with Thomas

Yet another stop on my testing tour? Yet another stop on my testing tour! Yesterday I learned together with Thomas Rinke. We briefly met at Agile Testing Days 2017, where we had a great conversation over lunch. Like many of the other people I paired with, Thomas is really engaged in the community, something I really appreciate. He is on the conference board for the German Testing Day, and he also actively helps his colleagues grow. He is already doing what I have wanted to do with my own company's testing community for a long time: watching great conference talks and discussing them together. Still on my "to try" list!


The Preparation Phase

About a week before our session, we started brainstorming and aligning on our topic. Thomas was up for exploration as well as automation. I proposed to go for an exploratory testing session, pairing the strong-style way. Thomas shared he had never experienced it himself but was intrigued to give it a try. He had come across Alan Richardson announcing Thingifier, his new demo app for API testing. Awesome! I had wanted to have a look at that myself for a while, so we chose it as our system under test. We went straight for version 1.1, however, as the previous version did not start up successfully on our computers.

In case you haven't explored an API yet, I recommend Maaret's fantastic article Exploratory Testing an API. For our session we agreed to use Postman as our tool of choice and to run the app on my computer, sharing screen control. Have I already mentioned how much I love Zoom for not only offering great video quality and recording functionality, but also free control sharing that's easy to use and actually works?

The Session Introduction

We started out setting everything up and getting any open questions out of the way. Interestingly, Thomas shared with me that he was nervous and excited, thinking "well, what could go wrong, it's going to be great". Testing with "strangers" in particular was something out of the ordinary, and we didn't know each other that well yet. The thing was: I was nervous, too. I always am before my pair testing sessions. Fear is even one of the reasons I started the whole testing tour in the first place. Doing this is still scary for me. However, I can already see that it is continuously getting better over time. "If it's scary, do it more often" really applies to me here.

Thomas also shared that he had prepared for the session upfront. For example, he read Llewellyn Falco's blog post about strong-style pairing and listened to Woody Zuill's podcast episode about mob programming on Agile Uprising. Both highly recommended!

I learned that Thomas was running Windows as his operating system, so I advised him not to rely on keyboard shortcuts: they are mapped differently on macOS, which doesn't translate well when sharing control, as I had learned in my session with Pranav. Thanks to the preparation done beforehand, everything was quickly clarified, so we could start the mob timer and begin testing together.

The Testing Part

Thingifier is described as a "hard coded model of a small TODO application". Our first goal was to get an overview of the capabilities of the application and understand its model. We started out using the given documentation and sent our first requests to see how the application worked and behaved. Interestingly, later on I saw that the app's release note for version 1.1 said "When the app is running, visit http://localhost:4567/documentation in a browser to see the docs." This contradicted the actual documentation, which could be found directly at "http://localhost:4567/".
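In case you'd like to follow along at home, here's a minimal sketch of the kind of first request we sent, written in Python with the requests library (we used Postman during the session, so this is just an illustration). It assumes the app is running locally on port 4567, as the documentation states, and uses the "/todo" collection we talk about below.

```python
# Minimal sketch, assuming Thingifier runs locally on port 4567 and
# exposes the "/todo" collection; we drove these requests through Postman.
import requests

BASE_URL = "http://localhost:4567"

response = requests.get(f"{BASE_URL}/todo")
print(response.status_code)  # 200 when the app is up and running
print(response.json())       # the current collection of todos
```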

Thomas started out as navigator, but when it was his turn to take over keyboard control as driver, we found that Postman and Zoom screen control sharing did not work well together. He could do right-click actions with his mouse, but not left-click. We discovered that clicking worked in Chrome, however, and when we went back to Postman it suddenly worked there as well. That would be an interesting topic to explore in itself!

While getting to know Thingifier's REST API, we came across the following findings and starting points for conversations.
  • The documentation spoke of todos and tasks. On first scanning over the documentation, it looked like these terms had been used for the same thing, so it seemed like an inconsistency. However, on a second read it became clear that you had standalone todos on the one hand, and projects that have todos assigned as so-called "tasks" on the other. Nonetheless, this naming differentiation caused us to stumble. Later I found that when using a category guid as resource instead of a todo guid, it was successfully accepted as a task, but calling the task collection afterwards returned a 400 (Bad Request).
  • When adding a task to a project, the request returned a 201 (Created). Repeating the same request did not add the task a second time, which was good, but the request still returned 201 (Created). We considered that not critical but also not completely clean; we would rather have expected a message that the relation already existed.
  • We stumbled a bit over the naming of resources. For collections the singular form was used (e.g. "/todo"), for sub-collections of an element the plural form (e.g. "/todo/:guid/categories"). We wondered whether there was a related naming convention for REST APIs. Intuitively we would have used the plural form for all kinds of collections, as the request was returning an array of all items. For example, "/todo" returned an array of "todos", so we would rather have named it "/todos". Further research showed that it indeed seems to be best practice to use the plural form for collections.
  • For testing purposes we would have loved to see existing relations in the request responses, e.g. when querying a project to see its assigned tasks and categories.
  • Creating a todo created it with a random guid, and it was marked as not completed by default; just as expected.
  • When checking all given relationships between entities, we found them to be inconsistent. Categories and todos were implemented in both directions, so you could see all todos of a category as well as all categories of a todo. However, for example, you could see all projects of a category, but not all categories of a project.
  • We had already tried three of the four offered relationships, and thought everything seemed okay. To complete our understanding we tried the fourth as well - and it failed. "/todo/:guid/categories" resulted in a 400 (Bad Request). Fun fact number one: it's so often the last thing we try that reveals an issue. Fun fact number two: I could not reproduce this issue anymore when writing this blog post; it seems we missed something here.
  • When querying a collection, we would have liked to directly see in the response how many elements were returned, at least to help testing.
  • When requesting a resource that did not exist, we received an error message as expected. Its wording was quite technical but expressive.
  • We deleted all todo items and then queried the collection of all todos. The request returned only an empty response without any value (i.e. {}). We would have preferred to receive an empty array of the queried entity instead, to keep the response expressive (i.e. {"todos": []}); see the sketch below this list.
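To make that last check a bit more tangible, here's a small sketch of how it could look as code. It assumes the same local base URL as above and that single todos can be deleted via DELETE on "/todo/:guid" - the exact delete route and the field names in the response body are assumptions on my side, as we drove everything through Postman.

```python
# Sketch of the "empty collection" check; the DELETE route and the "guid"
# field name in the response body are assumptions, not taken from the docs.
import requests

BASE_URL = "http://localhost:4567"

# Delete every todo that currently exists.
for todo in requests.get(f"{BASE_URL}/todo").json().get("todos", []):
    requests.delete(f"{BASE_URL}/todo/{todo['guid']}")

# Query the now empty collection.
response = requests.get(f"{BASE_URL}/todo")
print(response.json())  # we got {}, we would have preferred {"todos": []}
```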
After we had gotten more familiar with the application, we decided that it was time to go deeper on a certain part of the API. We agreed to have a more detailed look at creating todos, as we perceived it as a core part that was still small and graspable.
  • Sending a request with an empty body returned a 400 (Bad Request), as expected.
  • When providing the framing brackets {} as body we received the error message that the field "title" was mandatory.
  • When providing a body with the title field set to an empty string, we received the error message that the title must not be empty. Providing a blank resulted in the same message.
  • Providing a dot (.) as title worked, as did the Polish special character ł. I had previously learned that such additional Polish characters are not included in the basic Latin character set, so they can cause problems, e.g. when storing values in a database.
  • Providing a double quote (") returned a MalformedJsonException. We tried to escape the character using a backslash (\), but it returned "Expected ',' instead of '"'". We wondered how to properly escape a character in a REST API request body but left that for future investigation (see the sketch after this list).
  • We provided a quite long title of more than 256 characters. The request returned a 201 (Created) as expected. However, when then querying for all todos, the request only returned "Expected ',' instead of '"'". We found that this was not related to the long value provided but to our previous attempts at escaping characters. We indeed had to restart the app to reset the given test data.
  • We switched to the guid field. We found that if the guid is explicitly specified on todo creation, the given one is taken. In case the guid already existed, the todo was not created again but the existing todo was updated. We discovered that there seemed to be no validation regarding the length of the guid. Later I found you could even provide an empty string as the guid, which made it impossible to request that specific todo afterwards.
  • According to the documentation, both POST and PUT methods were offered. However, their purposes felt mixed up: both would create a todo in case it did not exist yet, and both would update an already existing one. Then again, I realized my own gut feeling was mixed up here as well, as I was thinking in terms of HTTP method semantics, while REST itself only demands a uniform interface.
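Here's a small sketch of the title checks described above, again in Python with requests. It assumes todos are created via POST on "/todo" (an assumption on my side, as we built the requests in Postman), and the status codes in the comments simply reflect what we observed during the session.

```python
# Sketch of the title checks on todo creation; POST on "/todo" is an
# assumption, and the commented results reflect our session notes.
import requests

BASE_URL = "http://localhost:4567"

def create_todo(payload=None):
    """Send a todo creation request and show what comes back."""
    response = requests.post(f"{BASE_URL}/todo", json=payload)
    print(response.status_code, response.text)
    return response

create_todo()                        # empty body       -> 400 (Bad Request)
create_todo({})                      # only brackets {} -> "title" is mandatory
create_todo({"title": ""})           # empty title      -> title must not be empty
create_todo({"title": " "})          # blank title      -> same message
create_todo({"title": "."})          # a dot            -> 201 (Created)
create_todo({"title": "ł"})          # Polish character -> 201 (Created)
create_todo({"title": 'say "hi"'})   # Python's json encoder escapes the quote
                                     # as \" - whether the app accepts it is
                                     # exactly the question we left open
```

Nothing fancy, but it shows how quickly such checks can be repeated once the app needs a restart to reset its test data.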

The Retrospective

As for all test sessions, we left some time at the end for a short retrospective to reflect on how it went and what could be improved. Thomas started with sharing his thoughts, and by doing so provided wonderful feedback! Here's what I understood.
  • It was a lot of fun and overall a great thing.
  • He had seen on Twitter that I invited basically the whole world to pair test with me and thought that was a really cool thing. He told himself he had nothing to lose and wanted to join in.
  • The communication beforehand was great. He really liked that I reached out in time upfront, about one week before our session. That we clarified the technical setup, aligned on the topic to pair on, agreed on the style of pairing. That both of us could contribute when choosing our topic and how we wanted to collaborate.
  • He learned something during the session which was great. He had the feeling that it was also worth it for me, not that I thought "oh no, Postman, exploring an API, how boring".
  • He found that I was open, friendly, and welcoming. He really liked that I was looking forward to our session myself.
  • The session offered lots of first-timers for him. It was his first time using Postman, his first time having a REST API as his test object, his first time trying the strong-style way of pairing, his first time exploring an API. He considered these quite a lot of firsts and therefore felt triggered to prepare for the session, which he thought was good. He felt a certain sense of commitment and accountability to do so. He had wanted to explore an API for so long already, and our session finally got him to really do it.
  • Thomas said the execution of the session and our alignment were good as well. Strong-style pairing was really great, too; he had a good feeling about it. He shared that he felt the trust given and had trust in me himself. He referred to Llewellyn's blog post, which emphasized the importance of trust as well.
  • What was not completely perfect from his point of view was that we worked on different operating systems. Thomas felt he was slowed down, as he always had to use the GUI for operations like copy and paste. This was not optimal, as he felt that I was so much faster when I was driving. He said that the upfront warning about the mismatched shortcuts was great, and in the end it was okay for him, but he still felt I was always faster.
  • Thomas valued the results of our session. We found some behavior working as expected, some surprising things, and some classics.
First of all I had to thank him for his great, structured, constructive feedback. It is so helpful to receive detailed feedback on the positive things as well, showing what is going in the right direction alongside what could be improved. Really appreciated!

Besides that, I shared that for me nothing is too basic to pair on. I find that I always learn something, especially with different partners who bring different perspectives. Sometimes it's great to see a different approach, but sometimes it's also great to see that we share approaches or think in similar directions. I started out on my testing tour because I did not know where I stood regarding my own testing skills and knowledge, and whether what I was doing made sense at all.

We both shared the approach of first starting with the base frame and then diving into the details. As Thomas added, there's also the other approach of going fully destructive right away. However, he normally likes to see the happy path first, how the application was intended to behave. He has had better experiences first providing feedback around that to developers instead of instantly coming up with rare, strange cases that cause the implementation to fail.

The whole communication with Thomas upfront and throughout the session was so easy, I really enjoyed it. Furthermore, I really appreciated that Thomas created a safe environment for both of us from the very start of our session. He shared right at the beginning that he was nervous and excited, which made it really safe for me to share my thoughts and feelings with him and put everything on the table. Neither of us had a problem calling something out during the session, as this trust had been established beforehand already.

Thomas shared that he loves to experiment and try new things in order to see which of them work. He also wanted to give mob testing a try, especially after having experienced strong-style pair testing now. With the strong-style approach you don't have any chance to lose focus; it's simply awesome to have two people or even the whole team fully concentrating on the task at hand.

The Inspiration

At the very end, Thomas gave me some feedback on my whole tour and what I am doing. He said it's a real inspiration for him to also get out of his comfort zone. For example, he had recently decided to invest the time to join the Tuesday Night Testing and really valued the event, having great conversations and exchanging experiences. Honestly, this is so great to hear, as it's exactly what I am trying to do: show people how to safely get out of their comfort zones and how valuable this can be for their personal development as well as for the rest of us learning from them.
Thomas also brought up the example of Kim Knup, who had just decided to tackle Alan Richardson's note-taking experiment and made this decision a public commitment. This created the same sort of accountability. Thomas shared he really wanted to use some of his time focusing on learning. We have both encountered people who do not take any time to learn, not even half an hour. Many seem not to believe they are entitled to use their time for learning, "away from their tasks", and a few might also not want to grow (which is a pity). Thomas said that sometimes we should do the things that we think are great fun, and sometimes we should do things that are on the edge of our area of responsibility. I so much agree.

I am already looking forward to meeting Thomas in real life again at the next Agile Testing Days at the end of this year. We are both going to join the Web Application Security tutorial by Dan Billing. Also, Thomas plans to join Toyer's and my workshop on Finding a Learning Partner in the Testing Community and hopes to find an accountability partner for himself there! So awesome. I really hope he will, alongside many more.

Wednesday, July 18, 2018

Testing Tour Stop #15: Pair Evolving a Test Strategy with Toyer

The testing tour continues. Today I had the honor to pair test with Toyer Mamoojee. Since the end of 2016, when we agreed on our first pact to try ourselves out as conference speakers, we have been having a call once every two weeks. We talk about all things testing, exchange experiences, trigger thoughts, provide feedback and support. Still, we had never tested a product hands-on together. You can imagine how happy I was when Toyer took the chance and scheduled a pair testing session with me on my testing tour!

What topic to pair on?

That's one of the common first questions that come up after scheduling a session. In this case, Toyer said he would love to do something from scratch, gather test ideas together, and align our thinking. We know each other and our viewpoints quite well, but we hadn't yet practiced testing together. He wanted to see how we really go about testing certain things, ask questions, and see each other's thought processes when actually testing.

Based on his input, I thought: let's tackle an application we both don't know, explore it, and come up with a very first test strategy based on the gathered knowledge. As I knew that Toyer had discovered mind maps for his work some time ago and had learned to love them for many purposes, I thought this could be our way to document our very first strategy draft: lightweight, easily editable, and visual.

Having a list of potential applications to tackle at hand, I reached out to Toyer and asked whether he wanted to learn more details about my idea before our session, or rather not. He answered "I'm tempted to say share more information.. but I would like to be surprised too, as I want to see how I can tackle something without preparing". So he chose the latter option, to which I can really relate. Over the last few years I have continuously been learning to tackle things without over-thinking them; and I'm not done learning this yet.

Evolving a Strategy While Exploring

At the beginning of our session I presented my topic idea to come up with a test strategy for a new product, and Toyer agreed to go for it. Lucky me, otherwise we both would have had to cope with an unprepared situation! ;-) So I offered several options for our system under test, of which Toyer chose Blender, an open source 3D creation tool. I had some rare encounters with this application back at my first company, when we developed an AI middleware for game developers, but had hardly touched it ever since. Toyer thought it looked really promising, as we normally don't get to test these kinds of applications.

Toyer shared that first of all, he would ask what kind of need this application is meant to fulfill and do related research upfront. Given the limited time box of our session, however, we decided to skip this and explore right away. Toyer accepted my suggestion to draft our strategy in a mind map, so we created it and continuously grew it according to our findings. He also agreed to do strong-style pairing while exploring, so I started up my favorite mob timer, set the rotation to four minutes, and off we went. It quickly became clear that we knew each other well. Collaboration was easy and communication fluent. We could fully focus on exploring Blender from a high-level point of view, trying to grasp its purpose and main capabilities, identifying limitations and potential risks. We were actually doing a recon session, just as Elisabeth Hendrickson describes in her awesome book Explore It! Reduce Risk and Increase Confidence with Exploratory Testing.

Throughout our session we gathered lots and lots of findings and discoveries, adding more and more important points to our test strategy.
  • Learnability. The application is not intuitive at all. It's a real expert tool. Still, everybody is a first-time user once, and even if you know the domain the user experience this product offers is not that great.
  • Functional scope. The more we explored, the more functionality we discovered. The whole tool seems really powerful, but again, is not easy to understand.
  • Screen resolution. The GUI is cluttered with many UI elements, sidebars, popups and more. On our laptop screen that was already a challenge, and it will still be one on larger screens.
  • Usability.
    • Menus, popups and tooltips looked very similar which made it hard to distinguish the purpose of each.
    • Feedback on actions was often missing or confusing.
    • Some sidebars displayed content related to views we had previously visited, without being updated with information for the current view. This way, they sometimes obscured information we needed.
  • Consistency.
    • Some actions worked in one area but not in the same way in another.
    • Some sidebars were named x but the header label said y.
    • Some delete actions asked for confirmation, others just instantly deleted the item.
  • Portability. We tested Blender on macOS. The product is also offered for Windows and Linux. At several points we found strange unexpected behavior and made the assumption that it might have been due to porting issues for the macOS version. For some points, I could even confirm that assumption when writing this blog post and checking the Windows version of Blender.
  • Maintainability and reusability. The GUI offered many sidebars, popups and views that shared similar layouts and menus. We noted to investigate whether they were duplicated or re-used components.
  • Robustness. We encountered error messages on invalid input that was not caught or prevented.
  • Automatability and testability. The application offers a Python API. We found Python commands offered in tooltips, the API reference in the help menu, and an integrated Python console. The console behaved differently than we knew it from other terminals, but it was still very interesting that you could automate operations, which would also increase the product's testability (see the sketch after this list).
  • Discoverability and documentation. The help menu offered an operator cheat sheet; clicking on it triggered a temporary message to look at OperatorList.txt, which we could not find. Only later did I learn that we had not come across the integrated text editor where you could open the named text file. What a hidden support feature! Also, we found the linked release notes page to be empty. We didn't dive deeper into the manual, but all available documentation would have to be tested as well, especially for an expert tool like this.
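To illustrate the automatability point from above: here's a minimal sketch of what scripting Blender via its Python API could look like. We didn't write any scripts during the session; this just shows the kind of thing the integrated Python console enables, and it only runs inside Blender, where the bpy module is available.

```python
# Minimal sketch; runs inside Blender's integrated Python console,
# where the bpy module is available.
import bpy

# Add a cube at the origin and inspect what was created.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object
print(cube.name, cube.location)

# List everything currently in the scene, e.g. to verify a test setup.
for obj in bpy.data.objects:
    print(obj.name, obj.type)
```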
All in all, we made our way through many parts of the application. We made quite a few assumptions. And we found we still hadn't seen a lot of it yet. In the end, we didn't have a final test strategy, but a good starting point to be iterated over.

Time to Reflect

We covered a lot in the limited time. We gathered lots of insights, ideas, and assumptions to verify. We tested a product in a domain neither of us knows much about, a desktop application instead of our usual web applications. We tried to gather information and keep a holistic view on things, not diving deep yet, focusing on uncovering the different aspects to test for this kind of tool, all the while mapping out our world and the points to tackle in our test strategy. As we learned more, our strategy evolved. We didn't reach an end by far. If this were our product to test, we would iterate over the strategy as we learned more.

The unknown domain had its own charm. We approached the product as a black box, not looking under the hood in the first place. We brought lots of testing knowledge, but quickly saw we lacked the domain knowledge. Toyer made an important point here: when hiring a tester for this kind of product, it would be best to look for someone who has already been exposed to these kinds of tools or related areas of expertise. We could still provide lots of value and ask questions that might otherwise go unasked, but we would quickly pair up with a product person or business analyst to model the product from a domain point of view, and also sit with developers to model the product's architecture.

Pairing up once again helped a lot. To see different things at the same time, by looking at different parts of the screen. To grow the mind map way faster than either of us would have done alone. And to include different thoughts and viewpoints.

Enrich Your Experience

This was the fifteenth stop on my testing tour so far. In the beginning I had only planned ten sessions, one per month from the beginning of this year until the end of October. Although I have already exceeded my initial goal, each further session has enriched my personal experience and brought me into contact with different approaches to learn from; all that while practicing my skills hands-on. Right now, I am reflecting on my whole journey so far as I am crafting a talk about my experiences on this tour, which I have the honor to give at CAST and SwanseaCon this year. And just while doing so, another tour stop was scheduled with me, further people indicated interest in pairing up or listening to my lessons learned, and I'm having further test sessions with awesome people. I'm curious where else this will lead me. What a wonderful time.

Tuesday, July 3, 2018

Testing Tour Stop #14: Pair Evaluating a Visual Regression Testing Framework with Mirjana

Yesterday I enjoyed yet another stop on my testing tour. This time I was joined by Mirjana Andovska. At the beginning of our session she shared that she is the lone tester at her company, just as I had been for a long time. She saw my tweets about my testing tour and knew she had to try that as well, so she signed up. How awesome is that?!

When I asked what she would like to pair on, Mirjana answered: automation. I found out that she has a lot more experience in automation than I have, so I was really glad that she wanted to pair with me. She was especially interested in visual regression testing, mutation testing, and automated security testing. All great areas of expertise I would love to learn more about! In the end we decided to go for visual regression testing. Mirjana was even so nice as to prepare playground projects, so we could start our session right away and get to know the available frameworks better.

Learning While Walking Through

As Mirjana had kindly taken over the preparation, our first step was to walk through the available projects and see what was already there and what she had learned so far. She had revived an older playground project of hers using the Galen Framework, as well as created a new project to try out Gemini as a visual regression testing tool. Unfortunately, it wasn't as easy as expected to get Gemini running on her computer, as it depends on node-gyp, which has a known issue on her operating system, Windows 10, requiring permission to install further software.

We decided to go with the Galen playground project first and learn more about this framework before maybe trying to get Gemini running on my macOS laptop. But first of all: why Galen and Gemini? Mirjana referred to a quite recent overview of the different frameworks available for visual regression testing. Based on her team's needs and what she had read about the tools, she found that Galen and Gemini looked the most interesting to check out first.

The playground project Mirjana provided was based on the getting started documentation and the first project tutorial for the Galen Framework. She had already implemented a first test for a sample website. We only had to run Selenium locally as a standalone server, execute the tests, and see how everything worked. Just by Mirjana walking me through the playground project, we both learned more about how the framework works, extending our knowledge along the way.

Which Framework to Choose?

The main purpose of our session was to learn more about the Galen Framework and evaluate it. We wanted to discover pros and cons, as well as learn the answers to the following questions.
  • How easy is it to set it up and configure?
  • How easy is it to write a test?
  • How easy is it to maintain the tests?
  • How easily can you understand the resulting reports? Can you quickly see what is wrong so you can fix it based on this information?
With every step we found out more. Here's what we learned.
  • The spec files included the CSS locators for objects as well as the actual test specification. We noted to find out later whether we could also keep the locators separate from the test specification.
  • Galen takes screenshots of the whole page as well as of the individual elements to be located. Using images of only part of the page for comparison was pretty nice. However, when looking at the different menu items of a sample navigation bar, we found that the images were cut at different places, sometimes even cutting off the menu item text. We felt this was quite strange, so we added it to our list of things to investigate later.
  • The test report format can be configured. We tried the HTML report, which included a heat map visualizing what the tests covered; pretty nice. However, the report only captured the console output from the framework, not from the application itself, which would have made it easier to see the information needed to reproduce a bug.
  • The test run didn't close the browser in the end, so we noted that we would need to take care of this ourselves.
  • We wondered how to add more specs in one test runner file. We postponed this question for later investigation.
  • We learned that we can specify tests in JavaScript as well as in other languages like Java.
  • We saw options to test for desktop or mobile. We decided to not dive deeper here for now.
  • We found that it's really easy to run the tests not only locally but also on different servers, or on services like BrowserStack or similar.
  • We ran the tests on BrowserStack, and they failed due to differences. At first we assumed that the differences were due to the different operating systems, Windows 7 and 10, as the Chrome version was the same. However, when looking at the compared images, we saw that the expected image showed a scrollbar where the actual one did not.
  • This led to the question of how the expected comparison image was created. Maybe on the first run? Maybe when running the "dump" command we had found? Or did it take the expected image from the last test run?
  • We had a deeper look at the command to create a page dump.
    • The Galen documentation told us that "With a page dump you can store information about all your test objects on the page together with image samples."
    • We ran the command and waited. As time passed, we began to wonder whether it was still running or not, especially as it didn't provide any log output anymore. We decided to let it run a bit longer and found that it took about three minutes.
    • We learned that the dump command, unlike the test command, did indeed close the Chrome browser after it ran through. The dumping process generated lots of files: HTML, JSON, PNG, JS, CSS files and more.
    • We discovered that the dump report gave us spec suggestions, but only if we selected two or more defined areas, like the viewport and a navigation bar element. It seemed the provided suggestions always referred to how one area related to another.
    • Mirjana thought this would come in handy when a developer had forgotten to align all navigation elements. I added that we could use this feature to explore the page. As the tests are quite slow in nature, we might only automate a part of them as a smoke test and explore around that manually.
    • If you'd like to have a look yourself, here's the official example page of a dump report, ready to be played with: http://galenframework.com/public/pagedump-example/page.html
  • Checking out the official documentation, we discovered the "check" command to run a single page test.
    • Here we learned that there are also options to create a TestNG, JSON or JUnit report.
    • The HTML report resulting from the check command had a different structure than the one from "test", which we found interesting.
  • We still had not seen how the reference images were really created and wanted to test our assumptions. Sometimes you need to recheck the basics at a later stage, too.
    • The documentation told us that the dump creates images that should be used for comparison.
    • When experimenting, we found that for "check" the stored reference images were indeed used for comparison. However, when running the spec files via "test", it seemed to take the image of the last run as reference. Might this be a bug? It's always interesting to check out the reported issues when evaluating a framework.
At the end of our session, I asked Mirjana about the pros and cons she sees regarding the Galen Framework from what we learned so far.

Liked
  • Easy to change configuration and run parameters, providing the flexibility to run the tests anywhere and in different combinations
  • Specifications for objects
  • Suggested specs feature; she shared that I opened her eyes to how to use it for exploring as well
  • Java or other options to write tests; depending on qualification or the type of project it's good to have the freedom to choose
  • Easy to run on Jenkins
  • Easy framework setup
  • Really nice documentation

So-so
  • The tests were a little bit confusing, maybe related to not being too used to JavaScript
  • JavaScript files as an additional layer of configuration besides the Galen config, the run command and the spec
  • Structure of the tests; we kind of tackled them but need to read more as we would also need to maintain them

Didn't like
  • You have to be careful with indentation when writing tests, as you get an exception if it differs from what's expected; especially if you use another editor for fast editing
  • Different reporting depending on the command used

All in all, Mirjana shared that from her perspective much depends on the purpose of these tests in the first place. Each test, whether manual or automated, should give us an answer to a question. Sometimes the answer is not a simple yes or no. Developers might not be used to that, but reports can give us much more information than true or false, like a percentage of how much the result differs from the expectation.

As shared earlier, Mirjana had also set up a Gemini playground project. However, the test project failed to run due to missing dependencies which needed to be installed first, which is not easy if you lack the required permissions. It was also hard to search for and find this kind of information. She had allocated the same timebox to explore both frameworks, but didn't get as far with Gemini as she did with Galen.

In her experience, you usually only have a limited timebox, like one day, to try things out. You don't have infinite time or budget or resources. This is usually an important parameter when evaluating frameworks. However, it also depends on the skills of the person who tries them out. This means that if you don't have the most knowledgeable person do it, it's a trade-off. I think this is a puzzle you cannot fully solve, as you only discover what kind of skills, knowledge, and experience you really need while you're already at it, not before. Mirjana gave the valuable advice that when you want to evaluate a tool, you need a familiar environment as the system under test. Ideally it's a real but very simple project, so you can write simple tests; otherwise you will lose time inspecting elements and learning about the application.

The Value of Learning Together

In the middle of the session Mirjana asked me: "Did you check the time? Because I'm having fun! :D" In the end, we spent 150 minutes testing together instead of the originally allocated 90 minutes.

I found the session really interesting and valuable. Mirjana had prepared it really well, which I appreciate a lot! So far I had only watched demos of visual regression testing but had never seen it running against an actual product. By pairing up, I now had this opportunity and I'm thankful for it. In my experience I get way farther in less time this way than when I'm on my own.

Mirjana shared that in between her daily work she looks for ways to make her life easier. She tries to experiment, to see what's new and what more we can do. Visual regression testing was one of the topics on her todo list. She had started the Galen playground project some time ago and realized, by coming back to it now, that she had grown a lot and learned a lot more since then. She hadn't seen such a framework in actual use yet either, and setting up a JavaScript project is not her everyday work. By doing it together, I gained more insights about visual regression testing from her and could give more ideas back to her. Now we both have a starting point. And that's awesome.