Sunday, October 6, 2019

TestBash Manchester 2019 - About Learning, Daring and Enjoying

Looking back at last week's TestBash Manchester 2019, I have to say this was an absolutely amazing week! I thoroughly enjoyed this experience. Brace yourself for quite a lengthy report. ;-)

Arriving in Great Company

Already when boarding my plane to Manchester, the first nice surprise awaited: Sven Schirmer was going to TestBash as well! The conferring part of the conference officially started for me right then.

After arriving at the hotel, we joined the first meetup preceding the workshop days. I really enjoy meeting a few people already before the official program starts. This gets me into the mood and it's not as overwhelming as "meeting" everyone at once.

Together with Sven, Richard Bradshaw, Mark Winteringham, Maaike Brinkhof, Göran Kero, Rick Scott and several more, we had a great dinner and lovely conversations. I felt emotionally prepared for the workshop day.

Learning: Workshop Day

I really enjoy workshops for learning, especially when they are hands-on and interactive. This time, I could finally catch a workshop I had wanted to join for some time: "Using Dependency Mapping To Enhance Testing Techniques" by Melissa Eadon. It was great! It exactly met my expectations. Mel emphasized how dependencies are a huge risk in modern development and how everything is about communication. In working groups, each of us drew a dependency map of their team, product or a problem they had. The rest asked questions for clarification, trying to identify where the problems are and what could be done about them.

As simple as it may sound, this visualization exercise was extremely powerful. It helped to get the thoughts and knowledge from our brains onto paper so we could discuss it together. It triggered us to describe the situation while drawing, and the resulting questions led to interesting insights. For example, I realized which part of our product I had nearly forgotten to draw at all; and indeed, this is a part that's not well covered with testing. Another example: by casually answering a question I realized that I had neglected testing one of the most crucial parts of our application! The identified risk areas are a great input to focus further exploratory testing on.

Besides that, this exercise would be very interesting to do with lots of people in my team and also across teams. Everyone has a different mental model, and this makes it visible, therefore serving as a discussion base. As a plus: it's also a relationship-building exercise!

In the afternoon I had the chance to see Emily Bache in action in her workshop "Getting High Coverage Regression Tests Quickly". She introduced us to the concept of approval testing and its characteristics: how you can still do test-driven development, how to handle test failures, how code coverage and mutation testing can guide us. This approach especially comes in handy for legacy code, yet is also convenient for new projects. The best part of the workshop: Emily provided us with four practice projects to work on, in the language of our choosing, trying out approval testing and covering existing code with tests. One of the people at my table could not get their IDE setup to work for approval tests, so we decided to pair up. Even better: he was familiar with strong-style pairing and showed great communication skills, so it was a real pleasure to pair up and solve the challenges together. It's just a lot more fun this way! :)
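
To make the idea more tangible: the core of approval testing is comparing the complete actual output against a previously approved snapshot. Here's a minimal sketch of that mechanism in plain Java - my own illustration with assumed names, not code from Emily's workshop or from an actual approval-testing library:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of the approval-testing idea: instead of asserting on
// individual values, the test compares the full actual output against a
// previously "approved" snapshot file. On first run (or after a change)
// the received output is written next to it, ready for human review.
public class ApprovalSketch {

    public static boolean verify(String actual, Path approvedFile) throws IOException {
        Path receivedFile = approvedFile.resolveSibling(
                approvedFile.getFileName() + ".received");
        if (Files.exists(approvedFile)
                && actual.equals(Files.readString(approvedFile))) {
            Files.deleteIfExists(receivedFile); // clean up old diffs
            return true;
        }
        Files.writeString(receivedFile, actual); // for review and approval
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path approved = Files.createTempDirectory("approval")
                .resolve("report.approved.txt");
        System.out.println(verify("total: 42", approved)); // false: nothing approved yet
        Files.writeString(approved, "total: 42");          // a human approves the output
        System.out.println(verify("total: 42", approved)); // true: matches the snapshot
    }
}
```

The nice property for legacy code: you don't need to understand the output in detail up front; you lock in the current behavior and review diffs as they appear.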

For the evening, I had missed the opportunity to sign up for another meetup, hosted at the BBC. So I ended up with plan B (which should have been plan A in the first place): dinner with Sven, Maaike, Göran and Emily! I'm so much enjoying spending valuable time with my community peers.

After dinner, we decided to go for a drink to slowly end the day. And suddenly, to my huge surprise, Patrick Prill was standing in the room! He is the one I test all my talks with at a local meetup - the same about two weeks ago. Yet somehow I hadn't asked him if he would also be at TestBash Manchester - I should have! This was an absolutely nice surprise. On the one hand because I really appreciate him as one of the kindest and most insightful human beings on earth. And on the other hand because of my talk. Granted, his presence also made me a tad more nervous, and yet I knew this would be the chance to learn whether I had managed to improve my talk.

Daring: Conference Day

Then the time had come: the main conference started. Our compère: the great Leigh Rathbone! Once more I sketchnoted most of the talks - besides the one directly before mine. Such a pity, I would have loved to learn from Areti Panou's story! Yet I knew I wouldn't be able to focus. I've done so many conference sessions already, and yet I can be certain I will be super nervous just before, and super distracted right after, my own session (so I didn't catch the 99-second talks either). I've learned not to beat myself up about that anymore.
This time I went last - a great honor, yet my nerves were on edge and showing. Also, this time it was not only a talk for me - there was an even more daring part included as well. On the one hand I shared my lessons learned on my #CodeConfident journey, and on the other hand I proved my increased confidence live on stage with my very first live coding demo! Small and short, and yet really extending my comfort zone. The demo gods were kind to me and all went well, so next time I'm ready for more! For here, I'll let the tweets speak for themselves.
After the conference we all came together for a meetup at a nearby location. Great food, even more great conversations - yet it was extremely noisy and I felt drained. So, a shorter evening for once; the next day another conference day waited for me! Time to get some rest.

Enjoying: Test.bash();

The next day it was time for the second edition of Test.bash(); overall, and my first one. It can be considered a more technical and hands-on focused version of TestBash - and I really enjoyed it. Lots of great talks all over the place, lots of live demos, too. Hosted by the wonderfully energetic Gwen Diagram! I got out of the day with more knowledge and even feeling refreshed.
Right after a great conference day, we all continued with a meetup hosted directly at our venue, The Lowry. Not only that, it was indeed inside its gallery! I love art, so this was a special treat. I thoroughly enjoyed the opportunity to have both great conversations as well as some quiet contemplating time when looking at the great paintings and sketches by this artist from Manchester. Simply awesome. I loved this.

And once more: a great dinner with great friends. Elizabeth Zagroba, Joep Schuurkes, Patrick, Sven, and Geert van de Lisdonk. Thank you for a great time!

Open Space

Only one more day to go before TestBash Manchester was finally over. I really looked forward to the open space day, hoping I still had enough energy. Lots of great topics made this easy, I learned a bunch.
  • "Testing without touching" by Joep Schuurkes and Elizabeth Zagroba. What a great session! Our group generated lots of great testing ideas and assumptions to verify, just by looking at the first page of an application. Great exercise, looking forward to taking this back with me to work.
  • "Threat Modelling" by Saskia Coplans & Jay Harris. Always awesome to learn from great security people. This time we threat modeled a service of our choosing to learn which requirements to fulfill to mitigate those threats.
  • "Motivation & Productivity" by Rick Scott. Great exchange about all things productivity hacks and self-motivation. It's not easy to get things done, and everyone is different so we need to learn what works for us.
  • "Accessibility Quiz" by Ady Stokes. This was super insightful! Ady handed out a page full of UI examples illustrating different types of accessibility issues. Great way to learn more how we can include all people and at the same time make the lives of everyone easier!
  • "Tester Growth" by Melissa Eadon. Mel asked all of us three questions: Where do you want to go? Where do you come from? How did you change? Great opportunity to reflect on our own situation as well as listen to the experiences and wishes of my peers.
  • "Practice Mob" session by me, additionally initiated by Joep Schuurkes and Elizabeth Zagroba (who mob with their testing community once a week). We wanted to work on something hands-on, practicing together - using the opportunity of having our peers in one place. Most people were simply interested to experience a mob for the first time, so we went for Joep's idea: let's extend MobTime, my (so far) favorite mob timer! Its drawback: an annoying alarm tone. On our endeavor to change it, we faced several setup issues as the project had not been maintained for quite a while. Still, the session was great, we made progress, and most of all: so much knowledge was shared within just two hours of mobbing. Loved it!
How to end this evening best? Of course with a nice dinner and some goodbye drinks, with Mel, Joep, Elizabeth, Rick and Geert.

Now this leaves me only with one more thing to say: a huge THANK YOU to organizers and volunteers! You did an amazing job. This was my favorite TestBash so far, and I'll remember it dearly.

Sunday, September 22, 2019

#CodeConfident: Journey - Part 3

The preparations for my upcoming talks at TestBash Manchester and Agile Testing Days kept me pretty busy the last months. Despite them needing lots of attention, I made some further progress on my Journey application to be shared here as part of my coding journal.

September 1

  • compiled todo list for Journey as preparation for next weeks and next pairing session: what to do before my TestBash Manchester talk about my #CodeConfident challenge, and what I could do any time
  • I'm still considering using the GitHub project board feature to make ideas transparent and keep track of them in the future; for now I postponed the decision to a later point when I have more capacity for it again

September 2

  • pairing session with Michael Kutz
  • before we got started we had a great chat about exploratory testing, quality indicators, giving workshops, and the difficulties of switching between developer and tester mindsets :)
  • I explained the project background, what got generated by JHipster and what not, as well as the general purpose of the app and the open todos that I see as of now
  • Michael chose to help me use Lombok to get rid of some boilerplate code (we also use Lombok at work)
  • Michael explained that Lombok runs during compilation, as an annotation processor, so the compiled classes already contain the generated methods; at runtime, Java simply ignores the annotations
  • first we checked that I had actually already installed the Lombok plugin for IntelliJ
  • we added Lombok dependencies to the gradle build script
  • added the Lombok @ToString annotation to the Challenge class and removed the related method; in the structure we could now see that the functionality still got generated as virtual method by Lombok
  • to see if things are working out we created a minimal test for this method; it's questionable whether to keep it or not - on the one hand we trust Lombok, on the other hand sometimes maybe not
  • learned a new shortcut to create a test: press Alt+Enter on the class name; for Java it's good practice to keep the test in the same package structure to have access to protected methods; using this shortcut the test will automatically be created in the same structure in the test package
  • JHipster generated the integration tests right next to unit tests; Michael normally keeps integration tests separate, as we also do at work
  • Michael saw that the generated unit tests had been created as public; he shared that this was needed for JUnit4 but is not anymore for JUnit5 so this access modifier can be removed to use default access within the same package instead
  • Michael instructed me while writing this simple test; we kept it really minimal, having it only check if the output contains a defined tag; if we were to keep it, we would implement all kinds of checks here
  • when running the test using the IDE we noticed that Lombok had not been recognized; we realized that we needed to activate annotation processing first for the project in IntelliJ
  • now @ToString was working!
  • we tried out the @Data annotation next which includes toString, equals, hashCode, getters and setters implementation
  • we removed the related methods and kept the builder methods that can come in handy when writing tests
  • to see if everything was still working we ran the related integration tests and saw that one test failed: equalsVerifier()
  • we checked this generated test and saw it was very generic
  • we debugged the test to see what happens; I learned that you can simply right-click on any previously executed test and directly re-run it from the offered context menu
  • we could not yet see why the test passed for the previously implemented equals method but not for the one generated by Lombok; and here we ran out of time
  • I learned that you can run gradle commands in a shortcut format, using only the initials of the command; e.g. instead of "gradlew integrationTest" just use "gradlew iT"
  • when trying this I learned that commands were not executable on Unix machines; Michael shared that you can run "gradle wrapper" to set the missing permissions; yet this failed, reporting an error in the gradle file! Another thing to look into
  • Reflection:
    • Michael: it felt like you were using IDEs for years already, you seemed quite code-confident already; you only waited for instructions when it came to writing the unit test, not sure what this was about; Lisi: I had a blackout in this moment, happens to me quite often especially when pairing, often have to look up syntax; Michael: especially as you shared you switch between different languages - I mostly stayed with one which then becomes second nature, routine
    • Michael: really likes pairing a lot; it's super important to read documentation together, to stay on the same page; was happy to navigate me through; Lisi: the session was super insightful and it was very pleasant to pair with you! I was happy to drive and have you navigate, especially with you sharing lots of knowledge, thoughts, your intention, shortcuts to increase efficiency and more.
  • TODO: check: when preparing for the session, it seemed that I had messed up Liquibase as the checksums did not match on first startup with a fresh database; yet later Michael said he could start the app without problems
  • TODO: when cleaning up, also separate integration tests from unit tests
  • TODO: make gradle scripts executable and commit, they are not running out of the box on a Unix machine
  • TODO: make unit tests private
  • TODO: investigate reported error in the gradle file
  • TODO: investigate failed test when using Lombok and fix, test if everything is still working, then switch to Lombok
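
One hypothesis for the failing equalsVerifier() to check later (an assumption, not something we confirmed in the session): JHipster's generated entities typically implement equals() based on the id only, while Lombok's @Data generates an equals() over all fields. A plain-Java sketch of how those two contracts diverge:

```java
import java.util.Objects;

// Sketch (assumption to verify): a test written against an id-only equals
// contract can fail once Lombok's @Data-style all-fields equals takes over.
public class EqualsContractSketch {

    // Hand-written, JHipster-style: two entities are equal if their ids match.
    public static class ChallengeById {
        public final Long id;
        public final String tag;
        public ChallengeById(Long id, String tag) { this.id = id; this.tag = tag; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof ChallengeById)) return false;
            return id != null && id.equals(((ChallengeById) o).id);
        }
        @Override public int hashCode() { return 31; }
    }

    // Roughly what Lombok's @Data would generate: all fields participate.
    public static class ChallengeAllFields {
        public final Long id;
        public final String tag;
        public ChallengeAllFields(Long id, String tag) { this.id = id; this.tag = tag; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof ChallengeAllFields)) return false;
            ChallengeAllFields c = (ChallengeAllFields) o;
            return Objects.equals(id, c.id) && Objects.equals(tag, c.tag);
        }
        @Override public int hashCode() { return Objects.hash(id, tag); }
    }

    public static void main(String[] args) {
        // Same id, different tag: equal under the id-only contract...
        System.out.println(new ChallengeById(1L, "a")
                .equals(new ChallengeById(1L, "b")));       // true
        // ...but not equal once all fields are compared.
        System.out.println(new ChallengeAllFields(1L, "a")
                .equals(new ChallengeAllFields(1L, "b")));  // false
    }
}
```

If this is indeed the cause, the fix would be to tell Lombok which fields to include, rather than to change the test blindly.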

September 8

September 15

  • GitHub reported a security vulnerability, asking to upgrade generator-jhipster to version 6.3.0 or later. The details stated it's of critical severity:
Vulnerable versions: < 6.3.0
Patched version: 6.3.0
Account takeover and privilege escalation is possible in applications generated by generator-jhipster before 6.3.0. This is due to a vulnerability in the generated Java classes: CWE-338: Use of Cryptographically Weak Pseudo-Random Number Generator (PRNG).
Generated applications must be manually patched, following instructions in the release notes.
  • upgraded version and adapted the RandomUtil class as suggested
  • ran all tests
  • found that e2e tests tried to run with outdated chromedriver for Chrome 74 instead of 76
  • yet upgrading the related npm package still used the old one
  • could run e2e tests using protractor directly instead:
    npm install -g protractor
    webdriver-manager update
    webdriver-manager start
    protractor src\test\javascript\protractor.conf.js
  • found that three e2e tests now failed after some recent changes, before which I had forgotten to run them; one was asserting on text that I had changed, so I updated it; two are failing due to changed pagination - need to think of a better solution here for assertions, need to revise these generated tests anyway
  • TODO: refine e2e test assertions for creating & deleting journal entries without depending on pagination configuration
  • TODO: find out how to run e2e tests with latest Chromedriver
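
For context on the CWE-338 issue: the gist of the patch is to generate keys from a cryptographically strong source instead of a predictable PRNG. A minimal sketch of the idea (my own simplification, not the actual RandomUtil code from the release notes):

```java
import java.security.SecureRandom;

// Sketch of the CWE-338 fix idea: reset/activation keys must come from a
// cryptographically strong source, not a predictable PRNG like Random,
// otherwise an attacker could predict the keys and take over accounts.
public class RandomKeySketch {

    private static final SecureRandom RANDOM = new SecureRandom();
    private static final String ALPHANUMERIC =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    public static String generateKey(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHANUMERIC.charAt(RANDOM.nextInt(ALPHANUMERIC.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. a 20-character, unpredictable activation key
        System.out.println(generateKey(20));
    }
}
```

The actual patch described in the release notes is what counts, of course; this only illustrates why the severity is rated critical.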

September 18

  • reconsidered the open pull request from Toyer Mamoojee to remove Chai as superfluous additional framework
  • the suggested changes did not work out of the box, but threw the following errors: ReferenceError: expect is not defined and WebDriverError: element click intercepted
  • googling did not turn up a quick solution
  • decided to keep the tests as they are for now and pair on them later, as they need general re-work anyway
  • commented on the PR accordingly and closed it for now

Two Exciting Talks Coming Up

TestBash Manchester is getting closer and with it my talk about my #CodeConfident challenge. I had two test runs already, one with members of my power learning group and one at a local meetup. The resulting feedback was invaluable to improve the talk further before going on the conference stage. A special shout-out to Toyer Mamoojee, João Proença, Viktorija Manevska, Simon Berner and Patrick Prill! Thank you all so much. The talk is already a lot better because of you.

About a month later, it will be time to enter another keynote stage. I'm feeling especially honored by this fact because of two points. First, it's the keynote stage of Agile Testing Days - the very first conference I've ever attended back in 2015. Second, I will go on this stage together with Toyer, sharing our personal journey as learning partners. If you wonder how it came to that, check out my latest guest post "What a Journey! From Conference Participant to Keynote Speaker" on the Agile Testing Days blog!

Saturday, September 14, 2019

TestBash Germany 2019 - A Day Full of Great Conversations

TestBash came back to Munich again this year! Despite it taking place in my home town, I had planned to skip this edition due to a full conference schedule this year. However, when I learned that Parveen Khan would present as well, I instantly bought a ticket. She was one of the first people to exchange thoughts about my #CodeConfident challenge with me last year, so I was eager to hear her speak.

This year was the first time that workshops were offered the day before the conference. I did not join them, yet I only heard good things about them, so that's an option to consider next year. For me, TestBash Germany started with a nice pre-conference meetup where I met many people, also from the local community, again!

The next day, the conference started. The wonderful Alex Schladebeck agreed to live blog again, so check out her great posts linked below to get a good impression of the presented talks. Besides that, here are the sketchnotes I created on the day.
The only sessions I did not record were the infamous 99-second talks and the speaker panel that got added to the program. The latter was the best panel I've ever seen at a conference so far! Great questions and lots of wisdom shared. If you happen to have a Ministry of Testing Dojo Pro account, this is definitely a recommended watch.

The conference day was over, yet conferring was not. Many joined the post-conference meetup where I had wonderful conversations with so many awesome people - about everything testing, how to treat each other well, and the world. Thank you all for a wonderful time!

Sunday, August 18, 2019

#CodeConfident: Journey - Part 2

My Journey project made some progress! It's still far from where I'd like it to be, yet here's another part of my coding journal - including three more pairing sessions and a brief detour to my former Rest Practice project. Enjoy! :)

July 14

  • pairing session with Parveen Khan
  • when preparing for my pairing session, I removed all build directories and had the project build from scratch - which indeed proved my assumption that this would fix the profile service! :D
  • in the beginning I told her about the background of the app, did a short run-through of the app, explained what had been generated and what I added or changed, and what the potential next steps are
  • Parveen wanted to explore the app and then do first changes together
  • when Parveen went through the app, we noticed several things; she raised lots of great questions and triggered several ideas for what to do next
    • TODO: the create / update challenge dialog showed the "influences" label in lowercase although it should be uppercase
    • TODO: the journal entry index page should sort entries by default by date in a descending way with the latest one on top (right now sorted by creation date)
    • TODO: for the index pages, the pagination thresholds should be defined, e.g. only showing 10 items per page
    • IDEA: Parveen asked me whether I wanted to use the app to instantly publish journal entries to my blog - and I thought of that indeed! Or at least to have a quick copy button for the description field so I could easily copy it over to my blog posts and make my own life easier
    • TODO: showing the challenge id for journal entries is not really useful, should rather show the tag instead (had that one on my list, yet it's great to hear confirmation from a different person that this should change)
    • TODO: the challenge delete dialog should not only ask for confirmation of the deletion but also inform what will happen with potential related journal entries
    • TODO: when trying to delete a challenge that still has journal entries, an error 500 is thrown and the error message is not useful at all
  • then we went on with making small changes to the app and seeing their impact
  • for the challenge index view, we added a column for the hypothesis
  • for the journal entry index view, we wanted to do the same, yet decided not to dive deeper here for now and to leave that puzzle for later
  • instead we changed the challenge detail view to show the tag in the title instead of the challenge id, and removed the tag from the rest of the detail page
  • IDEA: we found it would be great to see related journal entries also in the challenge detail view
  • Reflecting:
    • Lisi: having a practice project offers a great and safe learning environment to get your toes wet
    • Parveen: you need to have some idea of what app you want, then you start with small steps, then you get used to it, and you keep trying
    • Parveen: having me as trusted navigator gave her the confidence to do these changes herself; also, I gave her the confidence by telling her "it's fine, don't worry", "we can just delete it, we have it under version control" and letting her do it on her own and try things out; Lisi: it was super useful to practice explaining things I learned (some of them recently), especially in a simple way, really helps my own learning; this is also something such safe learning environments are perfect for
    • Parveen: was super great to both explore an app and make changes within just one hour! Now she understands why developers get into the zone, even forgetting to eat, doing things step by step; all this really helps to get into their shoes and create empathy
    • Lisi: was really cool, got a lot of ideas out of the session :D

July 14 (cont)

July 21

July 24

July 25

July 27

July 28

  • looking at examples from work
  • tried more things
  • really need to read up more deeply first
  • nothing committed yet

August 11

  • pairing with Shivani Gaba
  • We had planned to pair on my Rest Practice project, for which Shivani had feedback to share based on her experience with automated API tests.
  • Shortly before the call I realized that the framework used in this project relied on Java 8, which I had recently removed from my computer, so we could not run the tests. After spending some time going back and forth on how to install OpenJDK 8 so I would not need the Oracle version, we decided to go through the feedback first. (Later I found this site where I could download a pre-built version of OpenJDK 8 for Windows.)
  • Instead of always using an explicit status code (e.g. 200), Shivani recommended to use predefined response status types so that the test becomes more readable and less error-prone.
  • In the current tests I asserted for certain body properties. Shivani recommended to implement a JSON helper instead which would provide utility methods like comparing the actual response to an expected JSON. This comes in handy especially if you have a lot more fields to test for. Also, this helper could offer generic convenience methods for creating a JSON object with certain parameters, adding and removing properties, or merging JSONs. Shivani offered to provide a sample - thank you!!
  • Shivani recommended to store the complete response in a variable from which we can then extract headers, body, or status code - only what's needed for the specific test.
  • All endpoints were repeated in each test. Shivani would rather extract them as global enums (e.g. something like "Employee.GET_ID"), or at least declare them as variable, and re-use them, which would also increase readability and maintainability. If they change, you only need to change them in one place.
  • One open question in the practice project was how to get a response without given/when/then when it's only about the setup. Shivani shared how she would wrap the setup in a custom method (e.g. a post method) to hide these implementation details from the rest of the test, which makes it more readable and more maintainable as well.
  • At this point we decided to switch to my Journey project and have a look at how integration tests are done there. I explained the background and purpose of the project, what JHipster already brought with it, and that most of the tests got auto-generated and need improvement.
  • We took the test for creating a journal entry as an example to start with. I shared that what surprised me with the generated tests was that they check the database directly instead of calling the API again to verify if the record really got created. This might be good as it's about testing the integration with the database, yet I was not so sure about depending on the database state here. Shivani did not work with Spring and its test context yet, yet this surprised her as well.
  • Shivani shared that oftentimes integration tests are only used for the happy path, yet they are perfect for negative testing as well. For example, what if a mandatory field is not there? How do we handle that? We decided to implement a test for that to learn how it's done with the Spring framework. We discovered we could simply set a mandatory field to null before creating the DTO out of the entity which led to a bad request we could assert for.
  • We learned more about how the Spring framework builds requests and uses matchers for assertions.
  • We did not yet find a way to provide custom error messages this way, like we could with assertThat().
  • We found a more readable and re-usable way how to write the tests, extracting the expected response and the builder which made the test more lean. We could do this globally to reduce duplication.
  • Reflection:
    • I was really sorry for the rough start and the delay before we could work hands-on together. I was not as prepared for the session as I wanted to be; I'd had a bad day. In the session I gained a lot of new insights and good practices on how to improve readability and maintainability of API tests. Thanks so much for sharing your experience, Shivani!
    • Shivani shared she was happy to pair with me. It got her really motivated to work on her own project idea now; something she had had in mind for a while yet hadn't started yet :)
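
To capture Shivani's endpoint suggestion for later, here's a small sketch of what such an extraction could look like - the Employee/GET_ID names are illustrative, taken from our discussion, not from the actual project:

```java
// Sketch: extract repeated endpoint strings into one place so tests
// stay readable and a path change only needs to be made once.
public class Endpoints {

    public enum Employee {
        GET_ALL("/api/employees"),
        GET_ID("/api/employees/{id}"),
        CREATE("/api/employees");

        private final String path;

        Employee(String path) { this.path = path; }

        // Resolve path parameters in order, e.g. GET_ID.resolve("42")
        public String resolve(String... params) {
            String resolved = path;
            for (String param : params) {
                resolved = resolved.replaceFirst("\\{[^}]+\\}", param);
            }
            return resolved;
        }
    }

    public static void main(String[] args) {
        // Tests reference the endpoint by name instead of repeating strings:
        System.out.println(Employee.GET_ID.resolve("42")); // /api/employees/42
    }
}
```

In a test this would read like `given().when().get(Employee.GET_ID.resolve(id))`, keeping the endpoint definition in one place.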

August 15

  • Pairing session with Gem Hill on unit tests
  • Gem is starting her own testing tour and I felt super honored to be the first stop on her tour! :D For more details see
  • We started with me explaining the background of my Journey project, what it's supposed to do and also the code generation part, showing Gem what's there.
  • We wanted to dive deeper into unit tests, and we had quite some generated tests to look into to understand them better and also critically question them.
  • We decided to dive into the frontend unit tests using Jest as Gem's team is now also working with TypeScript and Jest, so this was a perfect match; and also for me it was great as currently I am working on frontend unit tests as well at work in a similar setup.
  • I shortly introduced Jest and its super useful watch feature, where it runs in the background all the time, re-running tests for changed files to quickly provide feedback while developing (my colleagues love that feature).
  • We decided to look at the test for the challenge detail component first as most simple starting point.
  • We went through the test from top to bottom, trying to formulate our understanding, making assumptions explicit, voicing questions and parts we didn't understand yet.
  • Then we had a look at the actual implementation to map it to what we saw in the test.
  • We noticed that the tested ngOnInit function only subscribed to the challenge, yet the subscription was not tested. This part is basic functionality provided by the framework - which is normally not what we want to test. So it's always the question: where do we trust the framework implementation?
  • We had a closer look at the mocking part. Interesting to dive deeper into would be the ComponentFixture class which is provided by Angular as fixture for testing and debugging components. This is something we're not using at work so I'm curious to find out more.
  • The test is importing a test module which comes with a lot of mocks for services.
  • When looking at the test module we came across the same number used for ids again, "123" - it seems the JHipster developers used the same number for all kinds of ids. To make sure our test is independent we changed the id for the used challenge and the tests promptly failed. Here we could see how Jest points out differences. I am also using the Jest plugin for Visual Studio Code which even shows the error inline. That's a nice feature, yet for me it's easier and quicker to see what went wrong in the console output.
  • We decided to move on to another test and chose the assumed next smallest one, the one for the delete component.
  • Here we came across the fakeAsync() and tick() methods which caught our interest. Once again we enjoyed the Visual Studio Code feature to hover over a method to get its description :) We learned that tick() simulates the asynchronous passage of time.
  • We discussed spies which allow to mock return values of functions to stay independent from other parts of the app.
  • Gem suggested to interact with the locally running app and see what's actually happening. This triggered the idea that I should install the Redux DevTools so we could inspect even closer. For now, we deleted a challenge and checked the respective requests sent. This way we saw that opening the delete dialog sends a GET request for the respective challenge - why? We had a look at the implementation and saw the defined selector. Was this the reason? We checked all references yet it was only defined here. We could not yet clearly see how things connect here, something to find out later.
  • We had a look at the implementation for deletion and found a method to close the dialog on cancel. Once again we asked ourselves, is this worth writing a test for? Is it worth the effort of maintaining such a test? This should normally be code that's not touched very often so it's probably not worth it.
  • We discussed mutation testing, an approach my team is currently trying out and that Gem used quite often in the past. It's a great way to see whether our tests are actually valuable, whether they would detect issues, and where we are missing tests. In my team we plan to use it as a guideline for what to write tests for - a better one than mere coverage, which is not as helpful.
  • Another part that caught our interest: on deletion, an event gets broadcasted and the dialog closed. In the test, however, the actual methods were not used; respective spy methods were used instead: dismissSpy() and broadcastSpy(). We assumed this makes the unit test even more of a "unit". Another point to dive into deeper!
  • We moved on to the next test, this time the more complex one for the update component. Once again, we started from the top, including the imports. Gem called out that it's interesting that an HttpResponse is imported - that was unexpected.
  • We also saw once more that the test was named "Challenge Management Update Component", as it had been generated - we both found it weird that the word "management" was included here.
  • We had a look at the test for saving. It's really great that the generated tests included comments for given / when / then, which made them easier to read and understand. This test, interestingly, also showed a comment for the tick() call which the other test lacked!
  • We checked the differences for updating and creating a challenge, both on test and implementation side.
  • What caught our eye was a boolean called isSaving. We wondered what it did. We decided to first interact with the app again to see what's actually happening: a GET request, a PUT request, nothing unexpected. Then we had a look at the implementation of isSaving and found that this boolean is set to true only while the actual saving process takes place - and that its value is only used in the tests. We both found this very weird. We would not expect this kind of implementation just for the tests; we had never seen something like this in the products we worked on. We would only expect to add something like this during debugging to see what's going on; having it just for the test felt strange and didn't seem to provide any value. I took a note to remove this implementation later on.
  • Reflection:
    • Time was flying and we soon got to the end of our 90min timebox.
    • We both found it really nice to take time and dissect unit tests, challenging them, asking why, finding issues and questions while learning about the product. It proved again the point that you don't need to know code to the extent a developer does to be able to use your skills and strengths as a tester on code.
    • I absolutely enjoyed our pairing session. When pairing we both bring our skills, expertise and knowledge to the table, and Gem and I had both overlapping and differing knowledge which fit together nicely.
    • When pairing I realized how valuable it is to also look at the imports! I normally skipped them, yet they already allow us to raise questions and improve our understanding.
    • It was great that Gem reminded me to interact with the real product again. I tend to focus too narrowly on what's in front of me - in this case the code base - so it's easy to get disconnected. I've noticed this several times already during my code-confident challenge, catching myself on things I should have known better. Gem pointed out that this requires a context switch, which is not easy. I agree, and it has also increased my empathy towards my developers a lot.
    • Gem noticed me taking notes and realized she should also think about how to take notes better for her next stops on her testing tour.
    • Gem found JHipster really intriguing. All the weirdness and bloat aside, it seems really valuable for quickly kick-starting a project and learning with it. She plans to generate her own project for further learning purposes.
    • We both found it super easy to pair with each other, even though we had only met a few times in real life or virtually, and had paired only once before. Time flew by and we both were energized afterwards!
    • I'm super looking forward to hearing more about Gem's testing tour and the lessons she will learn on her way :)
  • TODO:
    • install the Redux DevTools
    • remove isSaving boolean
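
To make the tick() discovery above more concrete, here is a minimal TypeScript sketch of a virtual clock. This is not Angular's real fakeAsync() implementation, just the underlying idea: inside the fake zone, timers are queued instead of really scheduled, and time only advances when the test says so.

```typescript
// A queued timer: a callback plus the virtual point in time it is due.
type QueuedTimer = { dueAt: number; callback: () => void };

class VirtualClock {
  private now = 0;
  private timers: QueuedTimer[] = [];

  // Stand-in for setTimeout inside a fakeAsync zone: nothing runs yet.
  schedule(callback: () => void, delayMs: number): void {
    this.timers.push({ dueAt: this.now + delayMs, callback });
  }

  // Stand-in for tick(ms): advance virtual time and flush all due timers.
  tick(ms: number): void {
    this.now += ms;
    const due = this.timers.filter(t => t.dueAt <= this.now);
    this.timers = this.timers.filter(t => t.dueAt > this.now);
    due.sort((a, b) => a.dueAt - b.dueAt).forEach(t => t.callback());
  }
}

const clock = new VirtualClock();
let saved = false;
clock.schedule(() => { saved = true; }, 500);

// Before tick(): the callback has not run, just like in a fakeAsync test.
console.log(saved); // false
clock.tick(500);
console.log(saved); // true
```

The real fakeAsync() does much more (microtasks, periodic timers, zone bookkeeping), but this is the behaviour the hover documentation describes: tick() simulates the asynchronous passage of time.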
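
The spies we discussed can be illustrated with a tiny hand-rolled version. This is only a sketch of the concept - jest.spyOn and jasmine.createSpy provide this (and much more) out of the box, and the ChallengeService here is a made-up stand-in, not JHipster's actual code:

```typescript
type Challenge = { id: number; name: string };

// Made-up service standing in for a real one that would hit the backend.
class ChallengeService {
  find(_id: number): Challenge {
    throw new Error("real backend called from a unit test!");
  }
}

// Replace a method with a fake, recording every call along the way.
function spyOn<T, K extends keyof T>(obj: T, method: K, fake: T[K]) {
  const calls: unknown[][] = [];
  const wrapped = ((...args: unknown[]) => {
    calls.push(args);
    return (fake as any)(...args);
  }) as any;
  obj[method] = wrapped;
  return { calls };
}

const service = new ChallengeService();
const spy = spyOn(service, "find", (id: number) => ({ id, name: "stubbed" }));

const result = service.find(42);
console.log(result.name);      // "stubbed" - canned value, no backend hit
console.log(spy.calls.length); // 1 - the spy recorded the call
```

This is exactly why spies keep a unit test independent: the test controls the return value and can still assert that the collaboration happened, without touching the real dependency.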
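
Mutation testing, mentioned above, can be boiled down to a toy example. Real tools like Stryker generate and run mutants at scale; this sketch only shows the core loop: break the code on purpose, re-run the tests, and check that at least one test fails ("kills the mutant"). A surviving mutant points at a gap in the test suite.

```typescript
// The code under test and two hand-written mutants of it.
const original = (a: number, b: number) => a + b;
const mutants = [
  (a: number, b: number) => a - b, // mutation: + replaced with -
  (a: number, b: number) => a * b, // mutation: + replaced with *
];

// Our "test suite": returns true when all assertions pass.
function suite(add: (a: number, b: number) => number): boolean {
  return add(2, 3) === 5 && add(0, 7) === 7;
}

console.log(suite(original)); // true - the real code passes

for (const mutant of mutants) {
  const killed = !suite(mutant); // a good suite kills every mutant
  console.log(killed);
}
```

This is why mutation scores are a better guide than mere line coverage: a test can execute a line without ever noticing that its behaviour changed.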

What else?

The next Ministry of Testing Power Hour is coming up! I'm excited to be the one answering all your questions about pairing and mobbing. Just ask them over at The Club, and on August 22nd I'll answer them from my experience.
Also, the call for collaboration for the European Testing Conference is open! Seize the opportunity to participate in this amazing conference next year by proposing your session ideas. The submission alone is worth it, as you will receive invaluable feedback.