Risk Based Testing: Q/A with Jenny Bramble

A cat has nine lives, but your software doesn’t. It’s the kind of thing I imagine Jenny saying. Jenny started her career as a support engineer, where she learned that software isn’t simply something that does its job; it should make its users’ lives better.

Risk Based Testing with Jenny Bramble - MoTATL April 2018 Meetup

Her role acted as the ‘translator’ between customer requests from support and the development team. Her love of support and solving problems let her find a sweet spot between empathy for the user and empathy for the dev team. This has served her well in her current role as a Software Test Engineer at WillowTreeApps.

Q: Do you use test automation tools, how do you use them?

Jenny: I have been primarily a manual tester, but our team is working to bring automation into our definition of done. I do the interesting things while the computer does the “boring” things. That’s what I really like about automated UI testing. These tools are really good at long-running tasks, computing numbers, and identifying bugs. Humans are good at identifying user flows and edge cases.

Q: How do you know if a test case is interesting?

Jenny: What I really mean when I say something is interesting is that I think it’s different. It is not your typical path or your typical error path. It could be an edge case or an unusual way to get into the product.

Q: What kind of data do you rely on to find interesting user flows?

Jenny: We use analytics for everything. We know what sort of users take what path, which informs our decisions about where we should focus our testing efforts.

Q: Is your risk matrix dependent on that data?

Jenny: It has to be. Whenever we talk about a risk matrix, we need to be firm about the probability of each risk. I don’t want to randomly select a number because it ruins the hypothesis. I want to say this is risky because this is a failure I have seen in the past.

Whenever you make stuff up, you dilute your purpose. You want to make an educated guess. You want to know what has failed before, what is failing now, and what we have not seen fail. You want to know what users do and create an informed model of user behavior. We call these behaviors personas. Each persona has a particular set of actions it performs, and that’s really interesting to me. Then you can break down risks by persona. A persona can be any collection of behaviors that a certain type of user displays.

The QA team has to test the product. We have to use whatever is the best source of information to test the product.
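The risk-matrix approach Jenny describes can be sketched roughly as likelihood-times-impact scoring broken down by persona. Here is a minimal illustration; the personas, feature areas, and scores are hypothetical examples invented for this sketch, not data from her talk:

```python
# A minimal sketch of a risk matrix: score = likelihood x impact,
# broken down by persona. All personas, areas, and numbers below
# are hypothetical illustrations, not Jenny's actual data.

def risk_score(likelihood, impact):
    """Score a risk on a 1-5 likelihood and 1-5 impact scale."""
    return likelihood * impact

# Hypothetical risks, informed by analytics and past failures.
risks = [
    {"persona": "power user", "area": "bulk export",        "likelihood": 4, "impact": 5},
    {"persona": "new user",   "area": "signup flow",        "likelihood": 2, "impact": 5},
    {"persona": "power user", "area": "keyboard shortcuts", "likelihood": 3, "impact": 2},
]

# Rank risks so testing effort goes to the highest scores first.
ranked = sorted(
    risks,
    key=lambda r: risk_score(r["likelihood"], r["impact"]),
    reverse=True,
)

for r in ranked:
    score = risk_score(r["likelihood"], r["impact"])
    print(f'{r["persona"]:>10} | {r["area"]:<18} | score {score}')
```

The point of the ranking is exactly what Jenny argues: numbers come from observed failures and analytics, not from guesses, so the ordering is defensible when you decide where to spend testing time.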

Q: If a developer is experienced, are the risks less when they are developing a feature?

Jenny: I think the risk is different. I have worked on a team of 5 to 6 people and every developer has a different style.

Q: A lot of QA teams are using automation today. Is that the direction QA testing is going?

Jenny: Yes, change is inevitable. Apps are becoming more and more complex. We need to free up testers to do complex user flows. Let computers do the repeatable stuff. Manual testers are here to do deep dives, find edge cases and create personas to make our products better. The more we can offload testing onto automation, the more time testers have to be creative thinkers.

Q: Can we teach testers to be creative?

Jenny: Creativity is a difficult thing to teach. However, there are techniques like mind maps that help out a lot in getting people to expand their thinking about features and uses.

Q: Can AI learn to do creative tasks like these?

Jenny: Absolutely! Once we get to a point where AI really starts to learn on its own, we can teach it to follow these paths. For me, that’s a little scary. At some point, we will reach a place where we won’t understand why the AI made the decisions it did. But, to be honest, that is how I make most of my decisions. That’s interesting to me. I love the idea of sitting down and pair programming with an AI. It’s scary but really interesting.

An AI is similar to a child. Like a child, you can teach the AI how to do testing. Personally, I think that’s pretty cool.

Machines are complex and we need them to tell us what they are doing. The fact that I, or my fellow human beings, can create something that we can’t understand is awesome. I am excited for the future.

Q: What will happen to testers when AI learns to test complex apps?

Jenny: Their jobs won’t go away; they will change. Our users have biases, and the only way to add those biases to our AI agent is by training it with human testers. We have to stand in for those users in order to test their stuff. We can’t just say it works as we expected, because we are not as important as our users. It doesn’t have to work the way the product owner expects; it has to work the way everyone, including the user, expects, and that’s where a lot of teams go wrong. I think our job as testers will be to identify this bias and train the AI.

Q: If a manual tester leaves a team, the team is left with a lot of technical debt. Should these companies start relying on automation software?

Jenny: There is always going to be technical debt. There is always going to be something that I don’t tell my co-workers or forget to tell my co-workers. Automated testing is a way to document what we know. I completely subscribe to the school of thought that test cases are a form of documentation. They tell you how the application is expected to behave and the more of that you have, the better. You are always going to hurt when you lose someone.

Q: If there is one product you would like to test, what would it be?

Jenny: A tractor. I grew up on a farm, my grandparents were farmers, and the things you can do with a tractor are crazy. They are out in the elements all the time. There is dirt, sometimes animals live in them, and they still have to work because it’s their owners’ livelihood.

Check out Jenny Bramble on Twitter (@jennydoesthings).
