Pushkar and I were two recent PhDs with an idea for mobile game testing. We had no customers and no clue about the demand for such a service. Our first step in the discovery process was to share our thoughts, vision, and knowledge within the app testing space as a way to start becoming thought leaders in this growing marketplace. Along the way, we discovered three systemic issues that led to the creation of MoQuality as an enterprise mobile app testing provider. This article is the story of our early entrepreneurial journey.
Our First Speaking Gig
“Becoming a thought leader” is not easy, especially when you are still shaping an idea and trying to build a business around it. We applied to speak at many conferences to discuss our ideas on mobile testing, and one of them, the Google Test Automation Conference (GTAC), invited us. Our session topic was “Using Robots for App Testing”. The talk drew a lot of solid questions from industry leaders and practitioners, and it led to another invitation.
Huawei, the largest mobile device manufacturer in China, invited us to give a talk at their internal test summit. Huawei customizes the Android operating system to run on its phones. To ensure proper quality control, it had to figure out how to test those modifications, including device drivers. One approach they wanted to try was using games as a proxy: games are fairly resource-intensive apps, and they tend to exercise most mobile device features (GPU, sensors, etc.). The idea of an AI that could play mobile games while the phones themselves were being tested and verified appealed to Huawei.
The First Opportunity
Another reason to choose mobile games as the means of validation is that games are the most popular apps on Google Play and the Huawei App Store in China. If a game uses a specific hardware feature and that feature has a bug, the game will crash, exposing the bug in the Huawei device. Thus, by analyzing and testing mobile games, Huawei could test its hardware and its custom Android OS software. Mobile games such as Flappy Bird and Super Mario became the proxy for device testing.
How big was this opportunity? For two PhDs from Georgia Tech, it was exciting! We had a contract and the money was waiting. All we had to do was deliver our idea to this global mobile device manufacturer! But please read the first paragraph of this article again, and remember, we only had an idea. We didn’t know how to actually deliver what we were talking about.
First Challenge - Working with a Black-box
As it turns out, creating an AI that can play mobile games is challenging. In comparison, building an AI to test games on a desktop is slightly easier. Take DeepMind's Atari-playing Q-Learning agent: because it could hook into the emulator, it could read the game state and the score directly. In mobile games, you can't hook into the app at all. So our first task was to design an AI agent that infers the state of the game from the screen alone, using computer vision; the second was to use Deep Q-Learning to train agents that learn to play mobile games on top of that inferred state.
Challenge #1: Testing mobile games without access to the game state. We call this approach “black-box” testing.
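The core idea can be sketched with tabular Q-learning over states derived purely from screen pixels. Everything below (the pixel quantization, the two-action space, the hyperparameters) is an illustrative assumption rather than our actual system, which used deep networks on real screenshots:

```python
import random
from collections import defaultdict

ACTIONS = ["tap", "wait"]  # e.g., Flappy Bird has a single tap control

def screen_to_state(pixels):
    """Discretize raw pixel values into a hashable state. This stands in
    for a real computer-vision feature extractor: the agent never sees
    the game's internal variables, only what is on screen."""
    return tuple(p // 64 for p in pixels)  # coarse 4-level quantization

class BlackBoxQAgent:
    """Tabular Q-learning agent that learns from (screen, reward) pairs."""

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup toward reward + discounted best next value.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

In the real system, a convolutional network replaces the lookup table (that is what makes it Deep Q-Learning), but the loop is the same: observe pixels, pick an action, observe the reward, update.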
AI Requires Heavy Computing Power
Machine learning, and the AI it produces, has heavy GPU requirements. Mobile phones have limited hardware and weak GPUs, so we could not run the machine learning on the phones themselves. This constraint, though not new, became our second challenge. We worked around it by running the AI testing system in a distributed fashion: the AI ran on a deep learning workstation, connected to and interacting with the phone that was running the game.
It turns out that our system was far more effective than we could have imagined. Keep reading to see what happened when we tested the most popular version of Flappy Bird.
Cloud Computing isn’t for Everyone
Like many tech entrepreneurs, we first tried to use the deep learning capabilities already available in the cloud. We quickly learned that no amount of computing power can overcome the latency of round-tripping commands between the cloud and a mobile device. Mobile games run in real time: if an action does not land almost instantly, the game state it was based on is no longer valid. In practice, that means the game character dies, you lose the game, and so on.
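The arithmetic behind this is simple: at 60 frames per second a game advances roughly every 17 ms, while a cloud round trip alone typically costs far more than that. The numbers below are illustrative assumptions, not measurements from our system:

```python
FPS = 60
frame_budget_ms = 1000 / FPS  # ~16.7 ms between frames

# Rough, assumed per-action pipeline costs in milliseconds:
cloud = {"screen_capture": 20, "upload": 50, "inference": 10, "command_downlink": 50}
local = {"screen_capture": 20, "usb_transfer": 5, "inference": 10, "usb_command": 5}

cloud_total_ms = sum(cloud.values())  # 130 ms: the action lands ~8 frames late
local_total_ms = sum(local.values())  # 40 ms: only a couple of frames behind
```

By the time a cloud-issued tap arrives, the screen it was computed from is several frames stale; over a local USB link, the lag is small enough for the agent to keep up with the game.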
Challenge #2: We could not connect phones to cloud AI resources because latency was a dealbreaker; we would have to deploy our own deep learning hardware. But we didn't have any, and our National Science Foundation grant allowed spending only on R&D, not on hardware purchases for the business. In other words, we had no money to buy the hardware we needed to solve the problem for the world's largest Android mobile device maker.
Buying The Big Machines to do The Job
We did not let the lack of money become another obstacle. We began researching and learned that NVIDIA makes a deep learning workstation we could order immediately. Unfortunately, it cost $15,000 and would take two months to deliver. We kept looking and found Exxact Corporation, which could assemble a deep learning box in a week and ship it to us, though shipping would take several more weeks, because shipping something worth that much money is slow and expensive. We didn't have time to wait even that long, so we ordered the machine on my personal credit card and flew out to Exxact Corporation in California to bring it home on an airplane. We checked the chassis (the non-crucial parts) as luggage and carried the sensitive parts (four NVIDIA GTX 1080 GPUs) onto the plane. Problem solved!
Putting The Machines to Work for Training The AI for Game Testing
We had not yet actually proven that our idea could work, despite learning how to overcome three major obstacles to our idea, spending $15,000 of borrowed money, and flying across the country to buy a computer!
We started by reproducing what other researchers had already done: we recreated an open-source version of Flappy Bird on a desktop using Pygame. That worked easily enough, but we then had to make it black-box and port it to a mobile device. Using some open-source code plus custom code of our own, we set up a bridge from the mobile phone to the deep learning workstation so that the workstation could directly observe everything happening on the phone.
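We have not described the exact tooling here, but the standard Android Debug Bridge (adb) exposes precisely this capture-and-inject loop over USB, so a minimal sketch of such a bridge might look like the following (the function names and flow are our illustration, not the original code):

```python
import subprocess

# Hypothetical sketch of the phone-to-workstation bridge over USB using adb:
# grab the screen as a PNG, decide on an action off-device, then inject a
# tap back into the running game.

def screencap_cmd(serial):
    """adb command that streams the current device screen as PNG bytes."""
    return ["adb", "-s", serial, "exec-out", "screencap", "-p"]

def tap_cmd(serial, x, y):
    """adb command that injects a tap at pixel coordinates (x, y)."""
    return ["adb", "-s", serial, "shell", "input", "tap", str(x), str(y)]

def capture_screen(serial):
    """Return raw PNG bytes of the screen (requires a connected device)."""
    return subprocess.run(screencap_cmd(serial), capture_output=True, check=True).stdout

def send_tap(serial, x, y):
    """Send a tap to the device (requires a connected device)."""
    subprocess.run(tap_cmd(serial, x, y), check=True)
```

The workstation loops over `capture_screen`, feeds the pixels to the learning agent, and answers with `send_tap`, which is what keeps all the heavy computation off the phone.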
At this point, we were doing machine learning on the workstation and sending commands to the phone via the USB connection. It took us about a week to build this system. Then we had to train it on the specific game. We chose Flappy Bird, the most popular game on Google Play at the time. Read here about how we used Deep Reinforcement Learning for this game testing.
We set it loose on Flappy Bird and left it running all night. When we came back in the morning, the game was still going, meaning the “player” (our deep learning workstation) had not died, but there was no bird on the screen!
We had found a bug that no human could find. Our AI figured out that the bird could fly above the pipes. The machine tried lots of different strategies, found this bug, learned it, and exploited it. No one else had reported this as a problem in the most popular version of Flappy Bird on Android at the time. After this amazing discovery, we connected to Super Mario and other popular games and quickly confirmed that we had created a workable solution to Huawei's very specific problem.
But Huawei Does Not Make The Apps
While we solved the various challenges involved in performing AI testing on a mobile device, we also discovered one problem that we could not overcome for Huawei. We had built an app testing system for Huawei, but Huawei does not make apps or games. We had solved a problem for one customer, but not for industry. So, we continued on with this customer while continuing to do customer discovery within the mobile game industry.
Challenge #3: When we did speak with game developers, we learned that most of them don't make any money. A few games make all the money, and those studios hire an army of testers. We realized it would be very difficult for us to compete with that army, so we continued our discovery and shifted our focus away from mobile game testing using AI.
Eureka! Enterprise Mobility
That’s when we learned that mobile testing for enterprise apps is an even bigger problem than mobile game testing, and it’s a repeatable one. Enterprises need far better testing for their apps, because mobile is a major channel that drives a great deal of traffic and revenue. The better the testing, the better the user experience, and the more users these enterprises attract and retain through their mobile applications.
As for what is next, we promise to keep you well informed of our continuing journey, and we also promise to tell you every time our AI discovers a fun new bug or easter egg in one of the most popular mobile games in the world.
From presenting an idea to raising a toast to our seed funding!