How Experitest and T-Systems Remove the Mobile Testing Bottleneck
Digital transformation is not just hype; it is a reality. Companies today can either proactively adapt and harness technology to unlock new growth opportunities, or talk themselves out of it and watch their competitors take the lead. Technology cuts both ways: companies grow by leaps and bounds when they use it as a strategic lever, but they also face novel challenges in implementing it. Either way, digital transformation is changing the way we conduct business around the world, to the point that every company can now be called a technology company. Companies depend heavily on technology to transform their key processes, drive customer experiences, and improve service quality, and that dependence extends into the realm of app development and mobile testing.
The mobile experience is a large part of this transformation. Consumers now spend more time online and on smartphones than on any other device, which means mobile applications are no longer a matter of choice; they are a necessity. Research suggests that 71% of businesses consider "revenue growth" and 68% consider "improved customer experience" their top business priorities. These priorities translate into application development and delivery (AD&D) actions such as "improving online customer experience", "adding or improving mobile customer experience", and "accelerating the commercialization speed of new product launches."
What is even more important to note is that as AD&D activity increases with this focus on revenue growth and customer experience, the need for test automation grows in tandem. Quality control in development environments is reaching enormous scale. Enterprises no longer consider testing a one-time process: constant releases require continuous quality control, creating a testing bottleneck that demands novel solutions.
The Testing Bottleneck
The testing bottleneck arises from the need for continuous changes to products and new releases. The more frequent the changes and releases, the more frequent the need for quality testing. The problem, all too often, is that the testing team comes in towards the end of the production cycle. This slows down the process and creates a testing bottleneck. The challenges are daunting, but there are ways to tackle them effectively. At Experitest, we recommend the following best practices to speed up your testing process and relieve the bottleneck.
- Create a test environment which is simple and requires low maintenance
- Ensure that your testing system covers a wide breadth of technology, including application types, operating systems, platforms, and device manufacturers
- Deploy intelligent, agile, and continuous delivery practices to achieve scalability in testing
- Expedite the feedback process and make data-driven decisions the norm
- Embrace a DevOps approach to speed up the testing process, increase the frequency of application testing, and minimize manual intervention in code testing through automation
T-Systems: s.Oliver as a Case in Point
s.Oliver is a German fashion company with a successful webshop for online fashion shoppers. They approached T-Systems, one of the world's leading providers of digital services. The brief was to help them develop a Shopping App tailored to their online mobile customers.
s.Oliver Fashion App: Situation and Goal
s.Oliver already had a webshop up and running, with a backend in place. Their goal was to create a native app for iOS and Android and link it to the existing backend. The sprint period was as short as two weeks, requiring T-Systems to deploy a highly agile and efficient approach. After launch, s.Oliver wanted the new app updated every four weeks, with webshop releases expected as often as every two weeks. Given s.Oliver's specific situation, the project team at T-Systems set the following goals:
- A very controlled QA process
- A highly efficient native app that supports frequent shop releases
- A minimum 4.1-star rating from customers who try the app out
In addition, T-Systems wanted to deliver a quality app by integrating rigorous QA right from the beginning. Given the short sprint period, it only made sense to deploy a highly flexible and agile approach. Code changes were routinely integrated into the main branch of the repository, and test automation helped test those changes as early and as often as possible. In the final stage, efficient testing of backend changes ensured smooth and seamless integration with the existing webshop.
Approach to Mobile Testing
First, T-Systems ensured a high level of coordination between the R&D team and testers on key quality metrics and testable processes. A full suite of automated test cases kept QA aligned with the continuous integration cycle. Second, to mitigate the shortcomings of either method alone, T-Systems combined manual and automated testing, creating a more synergetic process that built quality into the app. Third, they used a centralized hub of real iOS and Android devices in the mobile testing process. These devices (over 100 of them) were accessible from code and supported any browser, keeping them ready for real-time testing. The entire approach made effective use of automation in mobile testing and reduced overhead costs.
As for automation tools, T-Systems primarily relied on the SeeTest Continuous Testing Platform to run the various tests on s.Oliver's new app. In their view, SeeTest is a dependable automation tool that allows them to perform web and mobile app testing at scale. Since it also integrates with Appium and Selenium, the tool enables them to execute automated tests against 2000+ devices and web browsers.
To reduce overhead costs, T-Systems used build servers and divided all activities into "nightly" and "daily" processes. During the night, apps are built, deployed to the mobile device cloud, and run through various automated tests. During the day, manual testers access these newly built apps on the cloud and test them manually, while automation experts evaluate the results of the tests run overnight. Should they find any issues, they create bug tickets to get them addressed in time. Overall, the process orchestrates automated builds, tests, and deployments into a single release workflow, which increases productivity, enhances quality, and reduces overhead cost.
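As a rough illustration, the nightly/daily split can be modeled as a simple time-based dispatcher. The class, cutoff times, and activity names below are hypothetical illustrations of the workflow described, not T-Systems' actual tooling:

```java
// Hypothetical sketch of the "nightly"/"daily" split on a build server:
// the scheduler picks which set of activities to run based on the time of day.
import java.time.LocalTime;
import java.util.List;

public class BuildSchedule {
    // Assumed cutoffs: "night" runs from 22:00 to 06:00.
    static List<String> activitiesFor(LocalTime now) {
        boolean night = now.isBefore(LocalTime.of(6, 0)) || !now.isBefore(LocalTime.of(22, 0));
        return night
            ? List.of("build apps", "deploy to device cloud", "run automated test suites")
            : List.of("manual testing on cloud devices", "review nightly test results", "file bug tickets");
    }

    public static void main(String[] args) {
        System.out.println("02:00 -> " + activitiesFor(LocalTime.of(2, 0)));
        System.out.println("10:00 -> " + activitiesFor(LocalTime.of(10, 0)));
    }
}
```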
T-Systems believed (correctly) that automation reduces overhead costs. They approached the model by dividing the process into goals (things to achieve) and decisions (ways to achieve them). This helped them build and deploy automated tests that are easier to write, read, and maintain. The goals were the following:
- Maintainability: Tests should support changes in requirements and in system implementation at any later stage.
- Reliability: Tests should not skew or misrepresent facts or yield false results, and they should not break over minor issues such as a spelling mistake on the login screen.
- Expressive Test Results: Tests should not report errors in vague terms; instead, they should state exactly why a test failed when it failed.
- Remote Accessibility: Tests should be executable on the build server and across a wide range of devices, operating systems, and web browsers.
Once the above goals were set, for s.Oliver's Mobile App, T-Systems made the following support decisions:
- Use Java: Java provides better control (over things like assertions) and supports the effective abstractions necessary for test maintainability.
- Locate elements with XPath: XPath makes it possible to create unique identifiers for objects that can be located in the XML hierarchy, making tests more reliable and robust.
- Explicit Assertions on various levels: Asserting explicitly at multiple levels makes test results precise and informative.
- Run continuously on the Build Server: This enables the automation-friendly "nightly" and "daily" model of operation.
- Leverage central device lab: Use the central device lab to remotely access and run tests on a range of real devices.
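The assertion and XPath decisions can be sketched in Java as follows. The `check` helper, the XPath locators, and the `readByXPath` stub are illustrative assumptions, not s.Oliver's actual test code; a real test would delegate element lookup to a Selenium/Appium driver:

```java
// Minimal sketch of the "explicit assertions" and "expressive results" goals,
// using a tiny hand-rolled check helper in place of a real test framework.
public class LoginCheck {
    // Explicit assertion whose message says exactly what failed and why.
    static void check(boolean condition, String what, Object expected, Object actual) {
        if (!condition) {
            throw new AssertionError(what + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }

    // Stand-in for reading an element via an XPath locator on a real device;
    // a real test would call something like driver.findElement(By.xpath(...)).getText().
    static String readByXPath(String xpath) {
        return xpath.equals("//div[@id='greeting']") ? "Welcome back" : "";
    }

    public static void main(String[] args) {
        String greeting = readByXPath("//div[@id='greeting']");
        check(greeting.startsWith("Welcome"), "greeting after login", "Welcome...", greeting);
        System.out.println("login check passed");
    }
}
```

If the assertion fails, the report shows the exact expectation and the observed value, rather than a vague "test failed".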
Automation Approach: Page Object Pattern
To achieve the automation goals, T-Systems used a software design pattern called the Page Object Pattern, a standard pattern in the Selenium world. T-Systems adopted it to run automated tests on s.Oliver's mobile application. What this pattern essentially does is introduce layers of abstraction so that the application can be modified without necessarily having to modify the test script.
The test script contains the broad logic, roughly equivalent to the test case as originally specified. Most of the test script (about 79% of the code) is independent of the operating system, which is very good from a maintainability standpoint: any changes you make to the automated tests at a later stage apply to the entire application regardless of OS. This saves precious time and resources that would otherwise be spent updating code and re-running tests.
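A minimal Java sketch of the Page Object Pattern follows, with a fake in-memory driver standing in for a real Selenium/Appium driver. All class names and locators here are hypothetical, chosen only to show how the test script stays free of locators:

```java
// Page Object Pattern sketch: the test script talks only to page objects,
// so locator changes never touch the test logic.
import java.util.HashMap;
import java.util.Map;

// Stand-in for a real driver: stores "element" state keyed by XPath locator.
class FakeDriver {
    private final Map<String, String> elements = new HashMap<>();
    FakeDriver() {
        elements.put("//input[@id='search']", "");
        elements.put("//span[@id='cart-count']", "0");
    }
    void type(String xpath, String text) { elements.put(xpath, text); }
    String read(String xpath) { return elements.get(xpath); }
    void tap(String xpath) {
        if (xpath.equals("//button[@id='add-to-cart']")) {
            int n = Integer.parseInt(elements.get("//span[@id='cart-count']"));
            elements.put("//span[@id='cart-count']", String.valueOf(n + 1));
        }
    }
}

// Page object: the only place that knows the shop's locators.
class ShopPage {
    private final FakeDriver driver;
    ShopPage(FakeDriver driver) { this.driver = driver; }
    ShopPage searchFor(String term) { driver.type("//input[@id='search']", term); return this; }
    ShopPage addFirstResultToCart() { driver.tap("//button[@id='add-to-cart']"); return this; }
    int cartCount() { return Integer.parseInt(driver.read("//span[@id='cart-count']")); }
}

public class PageObjectDemo {
    public static void main(String[] args) {
        // The test script reads like the test case itself: no locators in sight.
        ShopPage shop = new ShopPage(new FakeDriver());
        shop.searchFor("denim jacket").addFirstResultToCart();
        System.out.println("cart items: " + shop.cartCount());
    }
}
```

If the app's UI changes, only `ShopPage` needs updating; every test script built on it keeps working unchanged, which is the abstraction layer the pattern provides.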
Since T-Systems uses SeeTest to automate mobile testing, running a test on any new change or release connects the application to SeeTest's Digital Assurance Lab. An interface showing the grid nodes appears, listing all available devices and the tests running on the device grid in parallel. When a test run concludes for a device in the central device lab, the system generates a report that highlights the test results.
In conclusion, the challenges we face in automation and mobile testing are varied, but they also point towards a more efficient future. With agile and continuous delivery practices in action, we have already come a long way in speeding up release cycles and increasing productivity. Now is the time to build on that progress and explore ways to address the issues affecting testing productivity. The steps illustrated above are just a start. We can't be satisfied with where we are now; we must be willing to take a hard look at how things can be improved in the future.