Exciting things are happening at MoQuality. Every day, more people are discovering the advantages of testing their applications on Barista. By automating the testing process, we are helping developers run more tests across more devices, identify and fix more errors, and save countless hours of their time.
So I thought this would be a good moment to look back at the first research prototype of Barista. Although we have come a long way and are still constantly working on new improvements to the platform, many of the foundational principles have remained the same: We wanted to build a tool that made testing applications simpler, faster, and more effective.
The seed for MoQuality was first planted when I was in graduate school at the Georgia Institute of Technology. I found that, although testing is extremely important for mobile applications, the testing process itself was tedious and error-prone. This was especially true for Android applications, which must be able to run on a range of Android devices and operating systems.
To solve this, I began by looking at how developers were already testing their applications. Most encode actions and expected results using a testing framework, such as Appium or Google’s UIAutomator or Espresso. These approaches are time-consuming and require testers who know how to encode test cases so that they run correctly across platforms.
I wanted to develop a more efficient testing method. Together with my colleagues Mattia Fazzini, Eduardo Freitas, and Alessandro Orso, I began developing a solution.
For any method to be successful, we knew it would have to simplify the experience of testing applications across the entire Android ecosystem, while also remaining reliable and fast. To do this, we set out to build a solution that would let testers easily create test scripts, regardless of platform, as well as automatically run them on multiple devices and operating systems.
We accomplished this in several ways. First, we kept the testing process as simple as possible by allowing testers to interact directly with an app while their actions were automatically recorded. If they want to specify the expected results of any action or stop the recording, they can use a floating accessibility menu that exists separate from the app. When they have finished testing, all of their recorded actions and expected results are automatically compiled into a general test script that can be run on any platform: any physical device or emulator, on any supported operating system.
We used the Espresso framework as our initial standard format and combined all of this functionality into a single tool that we called Barista.
The Barista testing method my colleagues and I developed made the testing process for Android applications much faster and easier by automating previously time-consuming tasks, including the need to build separate test scripts for different devices or platforms. A quick look under the hood, however, reveals numerous other benefits:
Support for test oracles: Test oracles make it easy to determine whether an application has passed or failed, yet most other testing approaches have very limited support for them. Barista not only lets testers create oracles, it lets them do so without any specialized knowledge or skill; very little additional training is required.
More robust tests: Because of the efficient way Barista automatically generates and encodes test cases, they are less likely to break when the user interface changes. This makes them ideal for regression testing.
Minimally intrusive: Unlike other testing mechanisms, Barista does not need to modify the application being tested at all. It accomplishes this using Android’s existing accessibility features. To use it, testers just install the Barista app on the device they are testing on, enable the accessibility framework for it, and start recording.
Although we have since introduced many improvements, I thought it could be illustrative to take a look at how the original version of Barista worked. Its basic functionality can be divided into three stages: recording, generation, and execution.
The recorder can be used to do three things: access the app’s user interface, process any user interactions, and assist the oracle definition process. Using Android’s built-in accessibility features, the recorder listens for two types of events: those that describe a change in the UI and those that are the result of user interaction. The recorder will store the type of event, identify any UI elements affected by an interaction, and collect any other relevant information along the way.
The recorder functions in much the same way when processing test oracles. It begins by storing the type of oracle, then will identify any UI elements associated with it and save additional information associated with the oracle, such as an expected value for a field.
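To make the recorder's bookkeeping concrete, here is a minimal Python sketch of the idea: a recorder that stores each event's type, the affected UI element, and any extra data, and that handles oracles the same way. All names here (RecordedStep, on_event, on_oracle, the element IDs) are hypothetical illustrations, not Barista's actual code; on a real device these events would arrive through Android's accessibility APIs.

```python
# Illustrative sketch only -- real events come from Android's accessibility
# framework; here they are simplified to plain method calls.
from dataclasses import dataclass, field

@dataclass
class RecordedStep:
    event_type: str                 # e.g. "CLICK", "TEXT_CHANGED", "ORACLE"
    element_id: str                 # identifier of the affected UI element
    extra: dict = field(default_factory=dict)  # e.g. typed text, expected value

class Recorder:
    def __init__(self):
        self.steps = []

    def on_event(self, event_type, element_id, **extra):
        # Store the event type, the affected element, and any other data.
        self.steps.append(RecordedStep(event_type, element_id, dict(extra)))

    def on_oracle(self, oracle_type, element_id, expected):
        # Oracles are stored the same way, along with their expected value.
        self.steps.append(RecordedStep(
            "ORACLE", element_id, {"oracle": oracle_type, "expected": expected}))

rec = Recorder()
rec.on_event("CLICK", "login_button")
rec.on_event("TEXT_CHANGED", "username_field", text="alice")
rec.on_oracle("TEXT_EQUALS", "welcome_label", "Hello, alice!")
print(len(rec.steps))  # 3
```

The key design point is that user actions and oracles share one uniform record format, which is what lets the next stage treat them interchangeably when generating statements.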
In the generation stage, Barista creates a test case that faithfully reproduces whatever actions the user performed during recording. The content of a generated test case can be divided into two parts: one that prepares the execution of the test case and one that contains the individual steps. The first part loads the starting activity of the test case and aligns the starting point of the recorded session with that of the test case.
The second part is generated by processing the user actions in the recording and writing a single statement line for each. By mapping each action to a single statement, we are able to improve the readability and understandability of the generated test cases. Each statement consists of a selector, which retrieves the affected UI elements; an action, which is performed on the UI element the selector identified; and a parameter, which controls how the action is carried out.
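The selector/action/parameter structure can be sketched with a toy generator that maps each recorded step to a single Espresso-style statement. This is a simplified illustration under my own assumed step format, not Barista's actual generation code; the emitted strings mimic Espresso's `onView(...).perform(...)` and `.check(...)` idioms.

```python
# Illustrative sketch: map each recorded step to one Espresso-style statement.
# Step format and element names are hypothetical.
def to_statement(step):
    # Selector: retrieves the affected UI element.
    selector = f'onView(withId(R.id.{step["element"]}))'
    # Action (or check, for oracles) with its parameter.
    if step["type"] == "CLICK":
        return f"{selector}.perform(click());"
    if step["type"] == "TYPE_TEXT":
        return f'{selector}.perform(typeText("{step["param"]}"));'
    if step["type"] == "ORACLE_TEXT":
        return f'{selector}.check(matches(withText("{step["param"]}")));'
    raise ValueError(f"unsupported step type: {step['type']}")

recording = [
    {"type": "TYPE_TEXT", "element": "username_field", "param": "alice"},
    {"type": "CLICK", "element": "login_button", "param": None},
    {"type": "ORACLE_TEXT", "element": "welcome_label", "param": "Hello, alice!"},
]

for step in recording:
    print(to_statement(step))
# onView(withId(R.id.username_field)).perform(typeText("alice"));
# onView(withId(R.id.login_button)).perform(click());
# onView(withId(R.id.welcome_label)).check(matches(withText("Hello, alice!")));
```

Because each action becomes exactly one statement, the generated test reads top to bottom like a transcript of the recording session.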
In the final stage, execution, Barista takes the information produced in the previous stage and runs the test case on whatever devices have been specified. This stage performs three main tasks: it prepares a device environment for the test case execution, it executes the test case, and it generates a report.
During the preparation task, the execution engine installs both the application under test and the generated test case on the specified devices. Once this has been set up, the execution task launches the test case on every device simultaneously. From this point forward, each UI update is synchronized with each step in the test case, ensuring that they can be checked against each other. If an action references a UI element that the device does not display, the test case ends in an error.
Once execution is complete, Barista generates a report containing the outcome of each test on every device, the time the tests took to complete, and debugging information for any errors or failures.
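The three execution tasks can be summed up in another small sketch: run the same steps against several simulated "devices", fail when a referenced element is missing, and collect per-device outcomes and timings into a report. Again, every name and the dict-based device model are hypothetical simplifications; real Barista installs and drives actual devices.

```python
# Illustrative sketch only: a "device" here is just a dict of visible element
# ids, standing in for a real installed app's UI.
import time

def run_test(device_ui, steps):
    # Execute each step in order against one device.
    start = time.monotonic()
    for i, step in enumerate(steps):
        if step["element"] not in device_ui:
            # An action references an element the device does not display:
            # the test case ends in an error.
            return {"outcome": "error", "failed_step": i,
                    "seconds": time.monotonic() - start}
    return {"outcome": "pass", "failed_step": None,
            "seconds": time.monotonic() - start}

def run_on_devices(devices, steps):
    # Run the same test case on every specified device; gather a report.
    return {name: run_test(ui, steps) for name, ui in devices.items()}

steps = [{"element": "username_field"}, {"element": "login_button"}]
devices = {
    "pixel_7": {"username_field": "", "login_button": "enabled"},
    "old_tablet": {"login_button": "enabled"},  # missing username_field
}
report = run_on_devices(devices, steps)
print(report["pixel_7"]["outcome"], report["old_tablet"]["outcome"])  # pass error
```

The report structure makes the cross-device comparison direct: one test case, one row of results per device.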
After developing Barista, we measured its effectiveness through user studies with 15 human testers and 15 real-world Android applications. We were excited to find that Barista was substantially more effective than comparable testing methods.
Consider the following numbers:
20.5 percent increase in the number of recorded test cases
37 percent increase in recording speed
97 percent success rate when executing test cases
99 percent compatibility rate across devices and applications
Although the original Barista ran into some limitations, such as applications that rely on bitmapped elements rather than standard UI widgets, it repeatedly outperformed other methods when encoding natural-language test cases. The feedback we got from developers during these initial studies reflected this success:
“I have been looking for something like Barista to help me get into automation for a while.”
“Overall, a very interesting tool! For large-scale production apps, this could save us quite some time by generating some of the tests for us.”
Since first developing this research concept, we have continued to improve Barista so that it can be used on as many mobile applications and devices as possible. To that end, we’ve developed a more powerful desktop version that dramatically extends the functionality in new and exciting ways.
How? In our next post, we’ll talk about just that!