Only a Tester? Part-Way Through a Data Analysis

Testers come from many backgrounds and play complex, multifaceted roles; they are more than “just testers”. At the moment, many testers don’t feel supported by their tools. My research revealed stories of anger, frustration, and fear. It also revealed the role usability plays in tool use, and the importance of understanding who uses those tools.

My name is Isabel Evans, and I am interested in the experiences of testers using tools and automation. I have been researching the question “Who is testing?”: I am curious about testers’ backgrounds, experience, and readiness to take on a testing role. My data contains the experiences of more than 250 testers around the world, and it gives insight into the diversity of the people who test and the complexity of their jobs. People who test often come from diverse backgrounds, with sometimes surprising careers and interests; they are more than “just testers”. At the moment, many testers don’t feel supported by their tools. My research revealed stories of anger, frustration, and fear. It also revealed the role usability plays in tool use, and the importance of understanding who uses those tools. A better understanding of the people who test will enable tools to support them better.

Highlights of this Article

  • Testers are drawn from diverse backgrounds and can play complex, multifaceted roles.
  • Testing tools do not always help testers achieve their work goals.
  • This can lead to frustration, and even to abandonment of the tools.
  • Improving the usability of the tools is only part of the solution.

When you’re wrong… follow the data

I have been testing since the 1980s. I collect stories from testers to learn about their personalities, work styles, and experiences, and I also collect data on their experiences of using tools. My aim is to improve the support that tools provide to testers. When I started, though, I was headed in the wrong direction.

In 2015 I was working in industry as a tester and test manager, a consultant, and a trainer. There were trends at conferences and across the industry around test tools and automation: automating test execution was becoming more common, and there was much discussion about whether that was feasible or desirable. It was also becoming common for testers to improve their programming and technical skills, and there was discussion about the usability of tools. I was curious whether making tools easier to use would let more people participate in testing, including people with business domain knowledge. As a computer science student, then a programmer, and then, from the 1980s, a professional in quality and testing (see Figure 1), I have spent decades in the industry. In 2017 I became a postgraduate researcher at the University of Malta in order to study testers’ experiences with their tools in a disciplined and rigorous way.

So far, more than 250 people from all over the world have provided me with data. They come from diverse backgrounds and have different experiences, and their stories have been both surprising and informative. The data came from interviews and workshops as well as anonymous surveys. This is qualitative analysis: I listen to people’s stories and mine them for detail, and each round of analysis uncovers more information and improves my understanding. For qualitative research, this is a large sample.

It’s not a simple picture…

The data shows that apparently simple questions like “Who is testing?”, “How do they test?”, and “What tools are they using?” are difficult to answer.

I set out to find usability issues by asking open questions (see the tables below) in the first studies, and discovered a richer source of information than I expected. Following that data stream and listening to the people who test, I was able to discard my original hypothesis and form a new one. Perversely, trying to make tools more usable can sometimes make testers’ experience of them worse. Examining the data made it clear that testers are very diverse and that no tool works for everyone. Tools must become more person-centric, and tool designers will likely need guidance in this area. That requires a better understanding of the people who test software and of their goals. Building tools that work for the people testing software is not an easy task. In the next sections I’ll discuss why, and how I came up with the idea for a People-centric Evidence-based Design Framework (working title “the PEDF”, but I will mention a fun acronym later!). Figure 2 shows how I’m building the framework.

What is a Tool? Automation Doesn’t Mean Only Test Execution

Ask people to list the tools they use to support testing, and you will get a long list of well-known tools available from open-source communities or tool vendors. But tools can support many aspects of testing, not just test execution automation: different tools help people analyze data, report it, plan testing, assess risk, communicate with other teams, and share information between them. Some of these tools are software; others are not. Rather than imposing a definition, I let participants tell their stories and offer their own ideas of what a tool is. For now, I am keeping an open mind; I may categorize tools later, or adopt an existing categorization if it suits the data.

People who test face difficulties

When I began to analyze the data from interviews, workshops, and surveys, two things struck me: people were describing multiple challenges, and they were expressing stronger emotions than I had expected. Figure 3 and Figure 4 show some of their stories.

Testers expressed concern that tools which were apparently designed to improve efficiency instead wasted their time and increased their effort. They described not being able to install and configure tools easily, not being able to collaborate, and not being able to share their data. People frequently mentioned usability both as a desirable quality of a tool and as a problem in using it. Other technical characteristics of the tools were also mentioned as obstacles to success, such as the inability to maintain test suites over time, security barriers to accessing tools, and poor tool reliability. Many people cited poor interoperability, and the effort wasted because information could not be shared across the team.

Table 1: Topics of concern and their frequency

Topic of concern              Frequency
Quality in use / usability    511
Problems and challenges       232

As Table 1 shows, the first 111 respondents to the 2018 survey identified 232 different issues and obstacles to tool success. Analysing those 111 responses, I was surprised at how many respondents did not simply give an objective, impartial account of using a tool: many displayed strong emotions. I reviewed the data again and asked two additional researchers to review it, and found that 35% of survey responses expressed emotions. Figure 5 shows part of the spreadsheet I used for the analysis; it records instances of participants’ emotional responses by survey question. Participants were asked to relate issues to emotions, considering both positive and negative emotions. Some respondents showed no emotion, while others displayed strong emotions, and even passion, in their responses. Table 2 lists the survey questions together with the emotional responses.

Figure 5: Part of the Emotions Spreadsheet during Data Analysis

Table 2: Survey questions and emotional responses

Question                                                # emotional reactions   # times positive   # times negative
SQ1: Tell me a little bit about you                     7                       3                  5
SQ2: Tell me about your tools stories                   25                      6                  22
SQ3: Was it easy or hard?                               6                       1                  5
SQ4: Do you remember ever avoiding using a tool?        12                      2                  17
SQ5: What are the qualities of a great tool?            3                       1                  2
SQ6: Which tools are you using in your organization?    3                       4                  4
SQ7: Which tools do you use to accomplish your goals?   0                       0                  0
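As a rough illustration of the kind of tally that sits behind Table 2 and the spreadsheet in Figure 5, here is a minimal sketch in Python. The data, emotion labels, and function names are hypothetical, invented for the example; they are not the study’s real dataset or coding scheme.

```python
# Hypothetical sketch of tallying coded survey responses: each response
# records which question it answers and any emotions spotted in it.
from collections import Counter

# Illustrative data only, not the study's real responses.
coded_responses = [
    {"question": "SQ2", "emotions": ["frustration"]},
    {"question": "SQ2", "emotions": ["pride", "frustration"]},
    {"question": "SQ4", "emotions": ["fear"]},
    {"question": "SQ5", "emotions": []},
]

NEGATIVE = {"frustration", "fear", "anger"}
POSITIVE = {"pride", "joy"}

def tally(responses):
    """Per question: count emotional responses and positive/negative mentions."""
    per_question = {}
    for r in responses:
        row = per_question.setdefault(
            r["question"], Counter(emotional=0, positive=0, negative=0))
        if r["emotions"]:                 # response expressed at least one emotion
            row["emotional"] += 1
        row["positive"] += sum(e in POSITIVE for e in r["emotions"])
        row["negative"] += sum(e in NEGATIVE for e in r["emotions"])
    return per_question

def emotional_share(responses):
    """Fraction of all responses expressing at least one emotion (cf. the 35%)."""
    return sum(bool(r["emotions"]) for r in responses) / len(responses)
```

A single response can carry several emotions, which is why the positive and negative columns can sum to more than the number of emotional responses, as in Table 2.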

Why does this matter? I began to think not only about the purpose of the tools, their usability, and their technical attributes, but also about the effect the tools have on the people who use them. People felt stuck because tools impeded their progress. Some suggested that managers and organizations adopt tools believing they will solve all problems. Respondents expressed a range of emotions, from fear and frustration to pride in what they had accomplished; some felt completely powerless. And yet tools can increase effectiveness and efficiency.

An attractive UI is not enough to guarantee happiness…

Some comments suggested that tools with attractive user interfaces (UIs) were nonetheless unsatisfactory and provided poor support for testing. Testers described such tools as “look cool, but…”, citing poor interoperability and difficult test maintenance. As Figure 6 shows, the designers of these tools seemed more concerned with the UI, and with providing a great user experience (UX), than with supporting testers in their activities.

Good interaction design is essential for the UI, and a good UI can improve usability and UX, but it is only part of the story. Usability goes beyond UI design: it takes into account people’s contexts and goals, so that they can be effective and efficient in achieving those goals. That includes understanding a person’s workflow, their preferred ways of working, and their skill levels. Some IT professionals I spoke with equate “usability” with “ease of use” and “learnability”; however, these are only two of the aspects a tool designer must consider, and how they apply depends on the context and on the skills of the person using the tool.

Designing for usability to support the people using the tool is important, but it is not enough. Tool designers must also consider “quality in use”. Good usability must be matched by technical attributes such as maintainability, security, reliability, performance, and interoperability, so that testers have flexibility and freedom from risk.

The overall experience of using the tool, and of dealing with its provider, builds the testers’ trust and their sense of credibility, flow, and usefulness; together these create the overall UX. UX design considers all of these factors, and both the UX and people’s past experiences influence their emotional response to a software tool. Let’s call this TX for testers: their experience of tools and automation.

If we focus only on the interface and on superficial ease of use, we achieve merely an illusion of usability, a distraction that can lead to frustration and a poorer user experience. If we focus on attractiveness more than utility when designing tools, we can end up with a tool that looks great but isn’t worth the effort of using. We run the risk of creating a tool that is neither flexible nor learnable. If we limit our focus to one user group, or design personas superficially, we will not understand the people behind the personas, and we may fail to support growth and change in those who use the tool. If we neglect maintainability, performance, security, and so on, we cannot offer quality in use. We end up designing tools that do not support people over the long term.

We need to get to know the people who test, not just by their job titles, but by what they are trying to achieve, how they feel, and what their preferences are. My current research focuses on understanding WHO tests, WHAT they test, HOW they test, and what tools they need to support them. Testers are more than “just testers”: they come from many backgrounds and have different preferences in how they work.

They are not just testers.

People in the research come from many backgrounds. Before becoming testers, participants worked in many different fields, including computer science, international relations, boatbuilding, and music education. Figure 7 shows some of the jobs testers held before they entered testing. Similar results can be seen in Figure 8, which includes data from the 2022 study. Final academic qualifications range from people who did not complete high school up to PhDs. Interestingly, the respondents aged 65 and over, and those aged 34 and younger, all hold at least a bachelor’s degree, while the respondents who had not graduated from high school were all in the 35–64 age group. This raises questions: when did the people in the older age groups get their degrees, and were some of them mature students? A further question is whether experience gained on the job is treated as equivalent to a degree; some participants cited their years of work experience as evidence of their professional status, rather than academic qualifications. Both of these areas remain a focus of my analysis.

Participants had received a variety of training and qualifications for their testing roles. Some reported that they had no training in testing; others were self-taught, or learned through conferences and online courses. Respondents had also attended courses such as ISTQB, TMAP, ITIL, and BBST. I am still analyzing this data to see whether there are patterns or trends in people’s backgrounds, training, and approaches.

Some interesting data is emerging about hobbies. Figure 9 shows an analysis of responses to an open question about hobbies and pastimes outside work. I analyzed the responses along five scales: (1) from very specific hobbies (e.g. numismatics) to very general statements (e.g. sport); (2) from more individual pastimes (e.g. solo hiking) to group or team activities (e.g. singing in a choir); (3) from more arts-oriented (e.g. painting) to more STEM-oriented (e.g. astronomy); (4) from more indoor (e.g. cooking) to more outdoor (e.g. bird watching); and (5) from more passive or watching (e.g. seeing movies) to more active or making (e.g. baking bread and making music). The graph shows that testers are fairly evenly distributed between specific and general statements about their hobbies, between indoor and outdoor pursuits, and between individual and group activities. There does seem to be a tendency to enjoy arts activities, and to prefer active or making hobbies over passive or watching ones. I am exploring this in the follow-up to the survey: do people’s hobbies indicate talents they bring to work? Are people’s hobbies similar to, or complementary to, their preferred ways of working? These are still open questions.

In the 2022 survey, I asked respondents about their job titles and whether those titles matched their actual responsibilities. Many people have extremely complex responsibilities, with many lines of communication and multiple tasks, and it is not easy to match their job titles to their activities. This is another indicator that people who test don’t “just” test.
