Friday, May 11, 2018

Testing Tour Stop #8: Pair Accessibility Testing with Viv

On today's stop on my testing tour, I had the pleasure of pair testing with Viv Richards. I got to know him via SwanseaCon. He was the first one to accept me as a newbie speaker last year and gave me the opportunity to speak again at this fabulous conference this year! I'm really glad it worked out to have him as part of my tour. It was a fun session full of insights.


Accessibility - The Neglected Child

Viv left it to me to choose a topic for our pairing session. He said he would be happy to explore any area I preferred or was comfortable with, as he sees himself as a "jack of all trades, master of none". I so relate to that! Well, I decided to go for accessibility testing this time. Why? In my opinion this is a very important topic, often overlooked or postponed. I have never had the opportunity to actively work on a product where this was a requirement, or even considered in any way. I had read some things about it, but really lacked practical experience. On top of that, I knew Viv had experience in this area: back when he was still in a developer role, he worked on a product where accessibility was a big topic.

To prepare for our session, I researched some pages which would help us kick it off. As shared in the post about my last stop, I don't like to limit the scope of these sessions too much; I prefer to keep enough freedom for us to explore in whatever direction the session leads us, as the main goal is learning. Still, I like to have some options prepared upfront. Here's what I found.

Hands-on Testing

For our session, I decided not to go for one of the demo pages, but rather to try a production application and see what accessibility looks like in the real world. I chose the web version of the todo application Remember The Milk. I don't use it myself, but I tried it out years ago when searching for a task management solution that fit my needs.

We started the session by imagining we had no mouse available and could only use the keyboard to navigate the application. We could successfully sign up for a new account this way, but then quickly faced problems. It was not obvious at all how the fields were ordered, and we often had no visual feedback showing where the current focus was. Viv shared that a screen reader tool would have problems with that. But even just by not using a mouse, we failed to navigate to certain fields, like setting additional options when creating a new task. As we stumbled heavily right from the start, we decided to switch to simulating a different kind of user experience.
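
Looking back, a quick way to sanity-check the tab order would have been a small script in the DevTools console. Here's a rough TypeScript sketch I put together afterwards (purely a heuristic, not something we ran in the session) that lists the focusable elements in DOM order, so you can compare it against the visual layout:

```typescript
// Heuristic: list focusable elements in DOM order to compare the keyboard
// tab order against the visual layout. It ignores visibility, shadow DOM
// and explicit positive tabindex values, so treat the output with care.
const focusableSelector =
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';

const focusable = Array.from(
  document.querySelectorAll<HTMLElement>(focusableSelector)
);

focusable.forEach((el, index) => {
  const name =
    el.getAttribute('aria-label') ?? el.textContent?.trim().slice(0, 40) ?? '';
  console.log(`${index + 1}. <${el.tagName.toLowerCase()}> ${name}`);
});
```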

What if we were just shortsighted and didn't have an optical aid at hand? We set the browser zoom to 200%. The page didn't look as nice anymore, but it was still fully functional. We could reach all page areas and elements. The same was true when reducing the zoom below 100%.

But what if we only had one hand available (maybe carrying a child in our arms), and that might not be our usual one? I'm right-handed, so I tried to use the mouse with my left hand while using the application. Though this was slower, it worked out well. Interestingly, during this time we came across functional application behavior which we would not have expected.
  • We just wanted to add a reminder to one of our todos, but when doing so the application took us to the settings page: we first had to define a device to be reminded on. Hm. Okay, we chose the computer. And the app instantly opted us in for all kinds of notifications. I don't like it when an app signs me up for everything by default; it just leaves me with a bad feeling.
  • The settings dialog showed a save button, but it was inactive. Why? We found it was only meant to save changes made to the kinds of notifications we'd like to receive. Not obvious, not nice.
  • Going further, we failed to define a reminder for a specific time; only days or weeks before our due date were available. For me this would be an important feature of a task management tool. But okay.
  • Then we discovered that subtasks can be sorted by drag and drop. There was a configuration menu, but it only offered one option, drag and drop sorting, and it could not be unchecked. Really strange! Only later did I find that the related help text explained that subtasks can only be sorted the same way as the original task list they belong to.
Well, the drag and drop functionality triggered the next idea. What if we could not use JavaScript? Viv shared that in his experience this was a valid case: on his former product they first had to develop without using JavaScript at all, which meant a much simpler UI. To simulate this in our case, we opened the Chrome DevTools and disabled JavaScript in the settings. We learned that you need to keep the DevTools open to make this work. After refreshing the application page, we found it could not be loaded at all. However, it also did not provide any feedback as to why. At the very least, a notification that JavaScript needs to be enabled would be required to not leave people lost.
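
One simple thing to check here (again a rough TypeScript heuristic, run in the console while JavaScript is still enabled) is whether the page ships any noscript fallback content at all:

```typescript
// Heuristic: does the page contain any <noscript> fallback that could tell
// users why nothing loads when JavaScript is disabled?
const fallbacks = Array.from(document.querySelectorAll('noscript'));

if (fallbacks.length === 0) {
  console.warn('No <noscript> fallback found.');
} else {
  // With scripting enabled, the noscript children are kept as raw text,
  // so textContent shows the fallback markup.
  fallbacks.forEach((el) => console.log(el.textContent?.trim()));
}
```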

We decided to start using tools in general to get an overview of existing accessibility problems. Viv recommended the Chrome extension WAVE Evaluation Tool. It presented a really nice overview of the current page's accessibility issues as well as explanations of why these points are considered problematic. This way we found issues like missing labels for input fields, missing alternative descriptions for images, and structural issues like having h2 headings but no h1 heading to start from. We found that ARIA roles, states, properties and labels were provided as expected. To my surprise, the tool also pointed out that an unordered list was used! When researching later on, I learned that incorrectly defined unordered lists still work as a formatting element for sighted users, but are a problem if you have to rely on a screen reader, which interprets them as single paragraphs without providing an outline. WAVE also offered the option to see the page without any styles as well as to test for sufficient contrast ratio between foreground and background colors. In the case of Remember The Milk, some elements did not provide sufficient contrast to fulfill the AA or AAA levels defined in WCAG 2.0.
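
Some of what WAVE flags can also be approximated with a few lines in the console. The following TypeScript sketch is my own rough approximation (nowhere near as thorough as the extension): it looks for form fields without an accessible name, images without alt text, and a missing h1:

```typescript
// Rough approximation of a few WAVE checks. Heuristic only.
const fields = Array.from(
  document.querySelectorAll<
    HTMLInputElement | HTMLSelectElement | HTMLTextAreaElement
  >('input, select, textarea')
);

const unlabeledFields = fields.filter((el) => {
  const hasAccessibleName =
    (el.labels && el.labels.length > 0) || // associated <label> element
    el.getAttribute('aria-label') ||
    el.getAttribute('aria-labelledby') ||
    el.getAttribute('title');
  return !hasAccessibleName && el.type !== 'hidden';
});

const imagesWithoutAlt = Array.from(
  document.querySelectorAll('img:not([alt])')
);

console.log('Fields without an accessible name:', unlabeledFields);
console.log('Images without an alt attribute:', imagesWithoutAlt);
console.log('Page has an <h1>:', document.querySelector('h1') !== null);
```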

Evaluating the contrast triggered us to consider color-blindness. There are Chrome extensions to simulate this as well. We tried Spectrum and "I want to see like the colour blind", both offering different display modes for the page. We realized we didn't know the technical terms to describe the different experiences when it came to colors!
What helped me get a quick overview of the different types was the color blindness table of the color-blind npm package. All in all, Remember The Milk did quite well; only with low contrast did we deem it hard to use.
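
Coming back to the contrast findings: the ratio that tools like WAVE check is defined in WCAG 2.0 and is simple enough to calculate yourself. Here's a minimal TypeScript sketch of it, assuming plain sRGB values between 0 and 255:

```typescript
// Minimal sketch of the WCAG 2.0 contrast ratio calculation.
// Colors are given as [r, g, b] tuples with values 0-255.
type RGB = [number, number, number];

function relativeLuminance([r, g, b]: RGB): number {
  const channel = (value: number): number => {
    const c = value / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(foreground: RGB, background: RGB): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: light grey text on a white background.
const ratio = contrastRatio([150, 150, 150], [255, 255, 255]);
console.log(ratio.toFixed(2)); // ~2.96, below the 4.5:1 AA threshold for normal-size text
```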

Another idea came to mind: what if we could see everything but had a hard time digesting the information, for example because we struggled with dyslexia? Chrome extensions to the rescue! We found simulators for this case as well. We tried dyslexia simulator first but couldn't get it working on our application page, even when adapting its settings. We were not sure if maybe our brains sorted everything out automatically, so we tried another Dyslexia Simulator - and instantly got closer to understanding what it means to have dyslexia and view a web page! This simulator constantly scrambled all text; we had a hard time focusing. It took us way longer to recognize the words, and we couldn't watch it for long.

So what about screen readers? Viv recommended the free NVDA (NonVisual Desktop Access), so we went for it. I was surprised by the audio feedback we received while the application was being installed and set up! Of course this makes perfect sense; it just showed me again how much I do not know about different kinds of technology experiences. Also, I instinctively used the mouse first, and the screen reader instantly commented on everything I hovered over - until Viv told me blind people would use the keyboard, not a mouse. So we tried it on Remember The Milk. The speech output was very fast and I had a hard time understanding it, but I could at least grasp some parts. Above all, I understood that the output did not provide helpful information. For example, after adding a new task I heard that "the input field is empty." So what, how should that help me? Why not provide the information that I could instantly create yet another task? Well, here it showed in practice what WAVE had pointed out: the input field did not have any contextual label.

As the final part of our session, we went halfway through a list of tips for testing for accessibility that Viv had found. We noticed that some of the listed points were considered in our application, like not labeling links with "click here", while other points had been disregarded, like the missing h1 tag. Font size was another remark, triggering the idea that although we did try to increase the browser zoom, we hadn't tried to increase the font size in general on the operating system. When trying to do so, we had a hard time finding the related setting in Windows 10! It seems there is only the option to scale everything at once: text, apps, and other items. I cannot tell whether this is a good way to handle it or not.

After our session, Viv provided me with his notes and thoughts. Here's what I haven't mentioned already.
  • W3C Accessibility
  • JAWS (Job Access With Speech) - a very good screen reader
  • Another idea: does the page have regions defined to enable a user to quickly jump to sections of the page using a screen reader? (This would have been something to test in the screen reader with more time; see the sketch below.)
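
The regions question can at least be roughly checked in the console before even starting a screen reader. Another heuristic TypeScript sketch, this one looking for landmark elements and roles:

```typescript
// Heuristic: look for landmark regions a screen reader user could jump
// between. A page without any landmarks forces linear navigation.
const landmarkSelector = [
  'main, nav, header, footer, aside',
  '[role="main"], [role="navigation"], [role="banner"]',
  '[role="contentinfo"], [role="complementary"], [role="search"]',
].join(', ');

const landmarks = Array.from(document.querySelectorAll(landmarkSelector));

if (landmarks.length === 0) {
  console.warn('No landmark regions found on this page.');
} else {
  landmarks.forEach((el) =>
    console.log(
      el.tagName.toLowerCase(),
      el.getAttribute('role') ?? '',
      el.getAttribute('aria-label') ?? ''
    )
  );
}
```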

What worked well, what to improve?

In the very beginning of our session we struggled with the technical setup. No matter how many video calls I have had with many different people, this just happens, and I have to remember to take it into account. At first the computer wanted to install updates. Then the call could start, but the microphone was not recognized. Once that was solved, screen sharing failed to actually show the screen. In the end everything worked and continued to work until the end of our session, including sharing control.

As with several previous pairing partners, we chose to pair the strong-style way, with one being the navigator and one the driver, switching roles every four minutes. Although we didn't always switch in time, this worked out pretty well. Conversation flow and collaboration were once again very smooth.

The session proved very valuable for both of us. To further improve it, Viv came up with the idea of focusing on a smaller part of the application. Accessibility is a huge area in itself, so spending more time on a smaller feature might have helped us. Something to keep in mind for future sessions!

Viv sees accessibility as a topic similar to security: many times we are facing a lot of technical debt in these areas. We should be mindful of it from the beginning when starting a new application. I totally agree with him. When starting a product from scratch, accessibility is often neglected, and then you end up with a legacy system where it's hard to build it in afterwards.

For both of us it was interesting to pair with another tester remotely and see other people's approaches. At work, we both mostly pair with developers, which is really valuable but has a different outcome. Viv also pairs with other testers, but more to instruct and provide support; not being on the same level creates different dynamics. He shared that the strong-style way of pairing helps a lot here: with the back and forth, you really have to contribute. Often people don't see what kind of value they can provide, for example when pairing with developers to write unit tests. However, there's always something to be shared, always something to offer. Wise words.

Something to Keep in Mind

If you take one thing from this post with you, then let it be this: we all experience the world around us and technology in different ways. Most people encounter one barrier or another when doing so. This doesn't have to be permanent; it can also be temporary or situational. So let's keep accessibility in mind to develop valuable products.
