Wednesday, March 13, 2019

#CodeConfident: Serenity Cucumber Practice - Part 3

This is the continuation of the coding journal on my first public code challenge serenity-cucumber-practice. If you haven't done so already, it's advisable to check out the previous entries first.
Although this part reflects only one more step in this practice project, its impact was huge. Parveen Khan had scheduled a pairing session with me. I was really looking forward to it, as we had a call at the end of last year where she shared that she was on a similar learning journey as I was!

March 9

  • #CodeConfident live pairing session with Parveen Khan
  • shared our challenges and struggles, what we want to learn
  • demoed the existing project, what's included, how it works, how I got started
  • Parveen agreed to give it a try; she drove and I navigated her through the code
  • implemented a new scenario this way: sharing a product with a friend (see the sketch after this list)
  • could re-use the existing given step; the when step was easily done; for the then step we could not yet find the correct locator, as it kept hitting the wrong element; here we ran out of time
  • we did all that in a very small time frame
  • in retrospect, Parveen shared how scary but how great it was to do it herself - now she understood what we do; so far she had only written feature files, nothing more, and had asked lots of people to explain the concepts; only when she did it hands-on now, getting instructions, did she understand the connections; once again the magic of strong-style pairing! really really happy about the outcome; great practice for myself as navigator as well, felt quite confident in what we were doing
  • we agreed to have a second session, Parveen will practice as well and then we can tackle the next challenge together
  • TODO: fix, clean-up and commit
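
To give an idea of where we ended up, here's a rough sketch of the scenario's step definitions; the step texts, class names, and locators are all hypothetical, not the actual project code:

```java
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import net.thucydides.core.annotations.Step;
import net.thucydides.core.annotations.Steps;
import net.thucydides.core.pages.PageObject;
import org.openqa.selenium.By;

import static org.assertj.core.api.Assertions.assertThat;

public class ShareProductStepDefinitions {

    @Steps
    private ShareProductSteps shareProduct; // hypothetical step library

    @When("^she shares the product with a friend$")
    public void she_shares_the_product_with_a_friend() {
        // the when step was easily done
        shareProduct.shareCurrentProductWithAFriend();
    }

    @Then("^her friend is notified about the product$")
    public void her_friend_is_notified_about_the_product() {
        // this is where we ran out of time: our locator
        // kept hitting the wrong element
        shareProduct.confirmationMessageIsShown();
    }
}

// Hypothetical step library; the locators are guesses.
class ShareProductSteps extends PageObject {

    @Step
    void shareCurrentProductWithAFriend() {
        find(By.cssSelector(".share-button")).click();
        find(By.cssSelector(".share-submit")).click();
    }

    @Step
    void confirmationMessageIsShown() {
        assertThat(find(By.cssSelector(".share-confirmation")).isDisplayed())
                .isTrue();
    }
}
```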

March 9 (cont)

  • fixed locator
  • cleaned up, adapted the code to the domain language
  • committed scenario


Meanwhile...

Shortly after Parveen and I had our session, the program for TestBash Germany 2019 got released. You can't imagine how delighted I was to see Parveen on that program! Originally, I had planned to skip the 2019 edition as my schedule is very tight this year and I promised myself to take care of myself. However - Parveen now really made me buy a ticket! I simply cannot miss the chance to see her speak. She has so much to share that we all can learn from! Oh, and if you cannot make TestBash Germany, she's speaking at Agile Testing Days USA as well ;)

Sunday, March 3, 2019

#CodeConfident: Serenity Cucumber Practice - Part 2

If you've read the coding journal of my first #CodeConfident challenge, you know that I've called for collaboration. The great thing: people followed the call! Many thanks to those who gave feedback on my first public code challenge serenity-cucumber-practice. Here's how the story continues and the project evolved.

February 7

  • received the following code review feedback from Peter Kofler:
  • "The Java language `assert` should not be used in tests. It can be disabled (is disabled by default) and does not give a good error message. I recommend using `assertEquals(expected, actual):` I see Serenity comes with AssertJ includes. I recommend using that. Open http://thucydides.info/docs/serenity-staging/ and search for assertj."
    --> switched from Java assert to use AssertJ assertions (http://joel-costigliola.github.io/assertj/assertj-core-quick-start.html); now understood why the Java asserts did not provide me as great messages as the Groovy asserts we use at work (note to myself: should have questioned that earlier...)
    --> "Thanks a lot! I already wondered why the asserts did not provide me a decent message. Switched to AssertJ assertions now, definitely a way better option :)"
  • "You do not have to create a branch for a work in progress test. Instead mark it as pending. E.g. Cucumber's "@wip" in the feature file ignores the scenario. Here are some options https://stackoverflow.com/q/3064078/104143"
    --> added option to ignore certain features & scenarios, excluding the tag from the Cucumber test runner
    --> "Thanks! I am aware I don't have to create branches to try things out. In this special case I decided to go for a branch as I was already suspecting that my approach is flawed and won't work. I only wanted to keep the work for reference and to receive feedback on it. Still, I now added the option to ignore single features or scenarios, something I was aware of but had not included here. Thanks for the hint!"
  • "All methods in this file are too long. Maybe split out chunks of coherent logic into helper methods. E.g. line 125-132 could go into a `createProductJsonItem()` method. Also building the request should go into another method. The remaining top level method will be easier to understand."
    --> did not think of this as this was "only" playground code - and yet it still makes any code harder to read, even if it's throw-away code; so, split parts into helper methods
    --> "Good point, thanks! I've now split parts into helper methods."

February 25

  • first #CodeConfident live pairing session with João Proença
  • I showed the project status, shared the struggles with the remove item from cart scenario
    • João: let's first see what it really does; inspected the add to cart button
    • we learned we can simply use the URL call sent when adding an item to the cart ("http://automationpractice.com/index.php?controller=cart&add=1&id_product=1&token=e817bb0705dd58da8db074c69f729fd8"); learned that the token parameter is not needed, but the call only works once you have a session, so you first need to open the page in the same browser session; --> we implemented this approach and it worked! :D (see the URL sketch after this list)
    • there are still problems with it, like the usage of a hardcoded product id; we are depending on its existence; João would rather use the product name or something; using a proper API to create our test data would be best
    • João uses a lot of subcutaneous testing for his product, via an API that is called exactly the way the UI calls it; that would increase testability here a lot
    • test data management is a really complex topic; they first tried the approach to set up the complete data before any test suite run, which turned into a big monster; now they are offering methods to create test data on demand when needed for each scenario; we use the latter approach at work, too
    • I learned what I had missed when searching for a solution myself: the exploratory mindset when approaching an automation problem; I had jumped straight to approaches I already knew
  • another problem João sees: usage of product ids in the feature files is not nice; would rather use the product name
    • João explored the URLs found in the source but could not find anything suitable; "?name_product=Blouse" or "?name=Blouse" were not working
    • ideas: we could 1) maintain a sort of dictionary in the code to map ids and names, 2) use any product randomly, but then we would lose the deterministic nature of the test, 3) push the product id down to lower levels as we don't care which one is used (in our product we often use the third approach)
  • I shared my challenge to hover over category menus
    • João would try an implementation to hover over the element, then debug the test and see what's really happening; we would probably need to wait until the submenu is displayed, and could check that its style changes to block (see the hover sketch after this list)
    • as far as I remember, the problem was finding the element to hover over; João: maybe the element did not have the hover behaviour, it seems the <li> has it
  • retrospective:
    • João: a really cool session; these kinds of exercises always trigger interesting discussions, e.g. around test data management or the fact that the Serenity session lets you pass variables between steps; it was really productive for him
    • Lisi: really cool as well; the biggest learning was seeing João's approach to tackling these challenges; I realized I had got stuck in my boxes when tackling different challenges, but should rather combine all the skills I have; the short coding part together was nice too, and I'd like to see more of that in future sessions; we also had really interesting discussions - eye-opening overall
    • João: "I'm not coding all the time, I'm not the person for that, rather trying to figure out shortcuts like calling the URL; I saw the URL and found it interesting, so I tried things there"; it's really about combining the skills to fit the problem, it's not just coding
  • TODO: commit solution, hide product ids, try again hover scenario, add update quantity scenario
  • TODO: update blog post / create new one, including code review from Peter and session with João; ask both first whether they are okay with that!
  • TODO: select next challenge
  • reflection before the session and afterwards: we did it quite similarly to Angie Jones' proposal: http://angiejones.tech/hybrid-tests-automation-pyramid/
  • realized that the needed URL only showed up when inspecting the "add to cart" button on the search page, not on the product page itself!
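
Here's the URL trick as a minimal plain-Selenium sketch (in the project it lives inside Serenity steps; the hardcoded product id is exactly the part we were unhappy about):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AddToCartViaUrlSketch {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Open any page first: the cart endpoint only works
            // once a browser session (cookie) exists.
            driver.get("http://automationpractice.com/index.php");

            // The token parameter turned out to be unnecessary.
            // id_product=1 is the hardcoded id we'd like to get rid of.
            driver.get("http://automationpractice.com/index.php"
                    + "?controller=cart&add=1&id_product=1");
        } finally {
            driver.quit();
        }
    }
}
```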
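And a sketch of the hover idea, using Selenium's Actions plus an explicit wait; the locators are guesses, not verified against the page:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Actions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class HoverSketch {

    public void openCategorySubmenu(WebDriver driver) {
        // Hover over the <li> carrying the hover behaviour,
        // not the link inside it (selector is a guess).
        WebElement menuItem = driver.findElement(
                By.cssSelector("#block_top_menu li.sf-with-ul"));
        new Actions(driver).moveToElement(menuItem).perform();

        // Wait until the submenu is displayed, i.e. its style
        // switched to display: block.
        new WebDriverWait(driver, 10).until(
                ExpectedConditions.visibilityOfElementLocated(
                        By.cssSelector("#block_top_menu li ul")));
    }
}
```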


February 26

  • hid the product ids from the shopping cart scenario steps; we don't care which product it is (see the sketch after this list)
  • TODO: try again hover scenario, add update quantity scenario
  • TODO: create blog post with code review from Peter and session with João
  • TODO: select next challenge
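
A sketch of what hiding the product id can look like: the step text no longer mentions an id, and a hypothetical step library picks a default product internally. All names here are made up, not the project's actual code:

```java
import cucumber.api.java.en.Given;
import net.thucydides.core.annotations.Steps;

public class ShoppingCartStepDefinitions {

    @Steps
    private ShoppingCartSteps shoppingCart; // hypothetical step library

    // Before: the feature file mentioned a concrete product id.
    // After: no id leaks into the scenario text at all.
    @Given("^a product is in her shopping cart$")
    public void a_product_is_in_her_shopping_cart() {
        shoppingCart.addAnyProductToCart();
    }
}

// Hypothetical step library: the id only lives down here.
class ShoppingCartSteps {

    private static final int DEFAULT_PRODUCT_ID = 1;

    void addAnyProductToCart() {
        // e.g. via the cart URL trick from the session with João,
        // using DEFAULT_PRODUCT_ID internally
    }
}
```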

February 27

  • aimed to implement hover scenario
  • tests suddenly got ignored and therefore skipped! found it was related to the tags used to mark parts to be ignored; as soon as no scenario was ignored, the tests ran; found that ignoring single scenarios still worked, but ignoring a whole feature file caused the non-ignored ones to be skipped as well! updating Gradle dependencies, rebuilding the project, restarting, etc. did not solve the issue; updated the Serenity libraries to the latest version --> now feature-level tags work again!
  • found that TightVNC can auto-scale the window, more useful for my case than TigerVNC on Windows
  • found a way to navigate to the desired category (not elegant, but it's working)
  • found that the assertion failed and yet the scenario was marked as passed! IntelliJ warned that the assertion result is indeed ignored; with a JUnit assert, the assertion throws as expected... lesson learned: sometimes it helps to walk away, take a break, then have another look; this way I found that I had set a bracket at the wrong spot, so I did not really assert anything (see the sketch after this list)
  • cleaned up
  • TODO: add update quantity scenario
  • TODO: create blog post with code review from Peter and session with João
  • TODO: select next challenge
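
I don't have the exact line anymore, but with AssertJ the bug typically has this shape - and IntelliJ's "result is ignored" warning is what gives it away (variable names are made up):

```java
import static org.assertj.core.api.Assertions.assertThat;

public class MisplacedBracketSketch {

    public void broken(String actualTitle, String expectedTitle) {
        // Bracket at the wrong spot: assertThat(...) only builds an
        // assertion object; nothing is ever checked, so the scenario
        // passes even when the values differ.
        assertThat(actualTitle.equals(expectedTitle));
    }

    public void fixed(String actualTitle, String expectedTitle) {
        // The chained call actually performs the assertion.
        assertThat(actualTitle).isEqualTo(expectedTitle);
    }
}
```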

February 28

  • implemented update quantity scenario
  • got annoyed by rewriting camel case to snake case, so installed this plugin to switch easily: http://plugins.jetbrains.com/plugin/7160-camelcase
  • the speed of implementing a very first attempt has increased heavily over the last days of practicing more regularly :)
  • remembered how to wait for an expected condition
  • remembered how to cast between different data types in Java
  • remembered the regex for numbers (all three in the sketch after this list)
  • realized code style inconsistency, having step definitions in snake case and all other methods in camel case; unified it in favor of camel case
  • TODO: create blog post with code review from Peter and session with João
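
All three "remembered" items in one small sketch; the locator and the label format are assumptions:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class UpdateQuantitySketch {

    // Wait for an expected condition: the quantity field is visible.
    public String readQuantityLabel(WebDriver driver) {
        return new WebDriverWait(driver, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(
                        By.cssSelector(".cart_quantity")))
                .getText();
    }

    // Regex for numbers plus a cast from String to int:
    // strip everything that is not a digit, then parse.
    public int toQuantity(String label) {
        return Integer.parseInt(label.replaceAll("[^0-9]", ""));
    }
}
```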

What else, what's next?

A few more pairing sessions have already been arranged. We will decide on the challenges to tackle together as the sessions get closer. In case you'd like to become part of my journey, feel free to schedule a session with me as well.

Although I had not planned to do so this year, Angie Jones triggered me to submit to Test.bash(); - and in the end I could not resist any longer. The topic? My code-confident challenge of course. Fingers crossed!
Besides that, my Test Automation University course "The Whole Team Approach to Continuous Testing" is finally recorded and about to go live soon - which means I have my focus back on becoming code-confident. I'm about to choose my second code challenge these days, so stay tuned for more!