
How to get test automation right – Questions, Answers and Predictions



Test automation is easier said than done, and it requires maintenance. There are many ways an organization can struggle with automated testing, or fail at it completely. Those who use the technology correctly, however, gain a lot in test efficiency and coverage.

In a recent webinar, "Your Test Automation Questions Answered," expert Jonathan Zaleski, Senior Director of Engineering and Head of Labs at Applause, tackled a variety of issues pertaining to test automation. Questions ranged from common approaches to automation, to what it means for testers' careers over the long haul, and more.

Applause Labs works on innovative solutions to challenges in testing, ultimately to deliver better value to Applause customers. As part of his team’s work developing Applause Codeless Automation (ACA), Zaleski aimed to address a number of QA pain points, such as device coverage and duplicative script maintenance work. In the webinar, he answered test automation questions to dispel some myths and help organizations take a strategic long-term approach to the task.


Dig into these test automation questions and answers from our webinar.

Some of this transcript has been edited for clarity.

What is the biggest mistake that organizations often make with test automation?

Jonathan Zaleski: There are a few different ways to answer this. In some regards, I think, it's about not thinking about test automation early enough. It's similar to internationalization or accessibility in this regard. You may develop a system with a plethora of features, and then you decide you want to test it. That's ultimately time that you're going to have to go back and spend to make your app testable. This can be a major time sink, and it can actually be somewhat difficult.

Another common mistake is not leveraging the various types of testing correctly. In the testing practice, there are a variety of approaches: some are focused a little more on the aesthetic; some are focused more on automating APIs, unit tests or functional tests. Leveraging the right tool for the right job is important, so that can be another friction point. Then, if you can't get your testing practice off the ground at all, it could come down to a lot of different things. It could be a delusion that bugs are simply not going to happen. I can say, after working in a quality business for more than four years, and a much longer career, that they're always going to happen.

What are the practical limits of scripted automation?

Zaleski: There are a few different ones that I'll highlight today. Test cases that require coordinating between systems can be difficult to orchestrate in a scripted automation solution. Say, for example, you need to trigger some workflow that sends you a text message, or some other push notification that you have to accept in the app. Most of the devices you're working with typically don't have dedicated phone numbers or the capability to do that. Some of the tooling out there does make this possible, but it's not super easy. So, that can make scripted automation not a great fit, but it is something you can work through.
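One common workaround for the text-message case is to give the test account a number from an SMS-provider API and poll that API for the code. Here is a minimal Python sketch, assuming a hypothetical provider endpoint and message format; real providers differ, and the Appium driver setup is omitted:

```python
# Poll a (hypothetical) SMS-provider API until a verification code arrives,
# so the test can type it into the app under test.
import time
import requests

def wait_for_sms_code(session, phone_number, timeout=60):
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = session.get(
            "https://sms-provider.example.com/v1/messages",  # hypothetical endpoint
            params={"to": phone_number},
            timeout=10,
        )
        resp.raise_for_status()
        messages = resp.json().get("messages", [])
        if messages:
            # Assume the newest message body carries the 6-digit code.
            return messages[0]["body"].strip()
        time.sleep(2)
    raise TimeoutError("No SMS verification code arrived in time")

# Usage inside a test, with `driver` being an already-created Appium session:
# code = wait_for_sms_code(requests.Session(), "+15551234567")
# driver.find_element(AppiumBy.ACCESSIBILITY_ID, "code_field").send_keys(code)
```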

An app, or even a website for that matter, that doesn't behave deterministically can also be difficult. Dynamic streams or dynamic workflows that do different things, perhaps behind some feature flag framework, can be hard, because you're effectively establishing a test case: do this, then this, then this, then expect this result. If the ground truth is changing, you'll have unexpected, or maybe expected, failures. So, that's certainly one part of it.
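One way teams restore determinism is to pin feature flags to known values before the run starts. A minimal sketch, assuming a hypothetical flag-service REST endpoint and flag names; real flag frameworks expose this in their own ways:

```python
# Pin feature flags for the test user so the app takes a known path.
import requests

PINNED_FLAGS = {
    "new_checkout_flow": False,  # test the stable path, not the experiment
    "holiday_banner": False,     # remove time-dependent UI from the run
}

def pin_flags_for_test_user(user_id):
    for name, value in PINNED_FLAGS.items():
        resp = requests.put(
            f"https://flags.example.com/v1/users/{user_id}/flags/{name}",  # hypothetical
            json={"value": value},
            timeout=10,
        )
        resp.raise_for_status()
```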

Similar to coordinating between systems, there are things that are even a bit more disconnected, like multifactor authentication: you have that thing that rotates a number every 30 seconds on your phone, or, back in the old days, on a dedicated digital device. Handling that can require a level of coordination that's just not super easy to facilitate.
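When the rotating number is standard TOTP, automation usually sidesteps the phone entirely: the test environment holds the test account's shared secret and generates the current code itself. A minimal sketch using the pyotp library (pip install pyotp), with a placeholder secret:

```python
# Generate the same 6-digit code the authenticator app would show
# for the current 30-second window.
import pyotp

TOTP_SECRET = "JBSWY3DPEHPK3PXP"  # placeholder base32 secret for a test account

def current_mfa_code():
    return pyotp.TOTP(TOTP_SECRET).now()

# e.g. driver.find_element(AppiumBy.ACCESSIBILITY_ID, "mfa_field").send_keys(current_mfa_code())
```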

Then lastly, I think apps that are time-sensitive can be particularly difficult to test. For example, you could be navigating through some banking application, scrolling down, clicking on a thing, checking your balance and making some assertion based on all the preconditions being correct. But testing something like Flappy Bird, where I click and my character goes up within a certain amount of time, or testing the accuracy of a video on Netflix, is pretty difficult, if not nearly impossible, certainly within the standard web and mobile automation toolkits.

Let’s assume that your team is not getting good results from test automation. What is the best way to audit the problem, or should you just entirely start from scratch?

Zaleski: It can be very tempting, as with most things, to just throw it out and start over, but that's almost never the right answer. So, I would say for this one, it really does depend. As with most forms of testing, step one should be to ensure that the issue is repeatable. This makes it easier for all the parties involved both to reproduce the issue and to know when it's fixed. Additionally, confirm that you're testing the right things. Maybe there are a handful of scenarios that are really core to your application and need to work 100% of the time. Make sure that's what you're testing.
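Before restructuring anything, it can help to put a number on repeatability. A minimal sketch: rerun one case many times and look at the failure rate, where run_test is a stand-in for however your suite invokes a single test:

```python
# Rerun a single test repeatedly to see whether a failure is deterministic.
def measure_flakiness(run_test, runs=20):
    failures = 0
    for _ in range(runs):
        try:
            run_test()
        except Exception:  # count any failure mode, not just assertion errors
            failures += 1
    print(f"{failures}/{runs} runs failed")
    return failures / runs

# A 20/20 failure rate points at the app or the test logic; an intermittent
# rate points at timing, environment or test data instead.
```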

Related to that, I would say that 100% test coverage, as it were, is somewhat of a myth. If it's a blocking requirement for releasing your application, it could mean your product never gets to market. So, I think it's really important to make sure you're challenging your assumptions about your test suite, making sure you're attacking it from all the right angles, and diagnosing problems in the now with the authors of the tests and the team doing the testing.

What exactly is codeless test automation, and what are the limits to that?

Zaleski: Believe it or not, the functional limitations are basically identical to those of the scripted automation I mentioned before, though there can be additional issues in a few areas. Given that it's not a code-first solution, you will find in ACA and other tools like it that timing is very important: basically, the speed at which you're going through each of the actions. This can be difficult because, occasionally, a slower API, or just a slower response from whatever is providing data to the application, could make your test unintentionally fail.
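In scripted automation, the usual defense against a slow response is an explicit wait that polls for a condition rather than a fixed delay; codeless tools apply the same idea through configurable step timeouts. A minimal sketch with the standard Selenium bindings, where the element ID is an assumption for illustration:

```python
# Wait up to `timeout` seconds for the element to appear, instead of
# failing the step because an API responded slowly on this run.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_balance(driver, timeout=30):
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located((By.ID, "account-balance"))
    )
```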

Additionally, and I think I've talked previously about best practices for readying your app for testing, there's the potential that we are not able to calculate a good locator for what you're interacting with in the application. You can solve this problem in a variety of ways, but, as we started to discuss in the first point, it's something you want to build in from the beginning of your app.
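In practice, building it in from the beginning usually means the app ships stable accessibility identifiers, so tests never depend on brittle XPath derived from the view hierarchy. A minimal sketch with the Appium Python client; the identifier name is an assumption, and driver setup is omitted:

```python
# Locate by a stable accessibility ID the app intentionally exposes,
# rather than by a layout-derived XPath that breaks when views shift.
from appium.webdriver.common.appiumby import AppiumBy

def tap_login(driver):
    # Brittle alternative (avoid):
    # driver.find_element(AppiumBy.XPATH, "//android.widget.FrameLayout//Button[2]")
    login_button = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button")
    login_button.click()
```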

Then, lastly, at least among the ones that immediately came to mind, there can be issues with web views, especially in mobile applications. Most of the automation tools on the market, like Appium, require you to jump through a few hoops: changing the state of your application and thinking about those locators a little differently. Inside of that, there can also be all sorts of failure modes, with things not lining up quite right, things being inconsistent, et cetera.
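With Appium specifically, the hoop is switching out of the native context and into the web view before DOM-style locators work. A minimal sketch; context names vary by app and platform, so it discovers them at runtime:

```python
# Switch the Appium session into the app's web view context.
def switch_to_webview(driver):
    for context in driver.contexts:  # e.g. ['NATIVE_APP', 'WEBVIEW_com.example']
        if context.startswith("WEBVIEW"):
            driver.switch_to.context(context)
            return context
    raise RuntimeError("No web view context found")

# ... interact with DOM elements here, then switch back when done:
# driver.switch_to.context("NATIVE_APP")
```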

What does test automation mean for a tester’s job and future?

Zaleski: So, largely, I think it's a very good thing. It means that testers, generally, won't need to repetitively execute the same test case release over release. This might not be great for some people, because maybe somebody is depending on that work within our community, or within any testing community, but it's better overall. It allows the team to focus a bit more on break-fix work, and, at least in our community, you could potentially be paid a bit more for creating the automated test case than for the manual test case that you execute every time.

To take that a little further and talk about ease of maintenance, our system and others out there can compose together reusable bits of scripts. Take your login script, which is a very common test case: log in with this username and password, and assert that I see "Welcome, John" on the landing screen. If that ends up failing, and you're using it as the foundation for all of your scripts, a codeless solution allows you to fix it once.

So, as a result of that, and the determinism of everything else in your test suite, you can use it as a fail-fast mechanism: you probably don't need to execute test cases two through six if test case one, the login script, fails, because being logged in is the precondition. Additionally, you can fix it once, in one place, and affect all of those other scripts, so it takes less time to fix the issue. Ultimately, as I've been alluding to with all these points, I think it frees up testers in general to work on bigger and potentially more interesting challenges.
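In scripted form, the same fix-it-once, fail-fast idea maps naturally to a shared setup step. A minimal pytest sketch, where make_driver and perform_login are hypothetical stand-ins for your own session and login helpers:

```python
# One shared login step that every test builds on: if login breaks,
# there is a single place to repair it, and dependent tests stop early.
import pytest

@pytest.fixture(scope="session")
def logged_in_driver():
    driver = make_driver()                    # stand-in for Appium/Selenium setup
    perform_login(driver, "test_user", "pw")  # the one place login logic lives
    yield driver
    driver.quit()

def test_check_balance(logged_in_driver):
    ...  # precondition: already logged in

def test_transfer_funds(logged_in_driver):
    ...
```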

What do you find to be the best way to share knowledge of test automation within the organization?

Zaleski: I think this heavily depends on how your organization disseminates information. Certainly, as part of the standard sprint process, it could be something to talk through during a retrospective, a demo, or some other meeting entirely. For folks that don't follow that practice to a tee, it could be a one-off meeting, maybe every quarter, to get an update from QA on how they're using the high-level tools that are out there, how they're using their communities, and the overall testing approach.

A testing plan, in general, is a cross-discipline concept. I think reviewing it early and often, offering opportunities for cross-training, and maybe even using a higher-level tool like ACA will let you share this information more easily and make it more approachable. You'll find that our tooling and our test cases, in general, are very self-documenting. They're very easy to understand; even without all of the context that goes into them, they're pretty self-navigable.

What is in store for the future of test automation?

Zaleski: That's a big one. I have a few different predictions. Some are based on experiences I've had over the last year and a half and earlier in my career, and some are a little more related to tooling like ACA. First and foremost, I expect high-level tools that make test authoring and maintenance easier, and more approachable to all users, to become standard in the marketplace. I also see continued evolution of tools that are capable of detecting and authoring AI-driven automated tests.

So, for example, one early project that we did in Applause Labs evaluated user interactions and put out a recommendation, like "this is a login script," or "this is a script that adds an item to the shopping cart," and so on. I think the application of AI in this space is pretty new, but I see it as a very powerful tool in our tool belt.

I also see potential for more screenshot-driven solutions. Appium, for example, supports this with a plugin, which allows you to do cross-OS product testing. Now, in our experience, that hasn't been something there's really a market need for, but what I mean is a solution that lets you record a test on Android and replay it on iOS. There are a lot of technical reasons why this is difficult, and it might ultimately be aided by AI. Again, it may not come to pass, but it could be something we see pop up.

I also see cross-device testing becoming all the more important. With ACA, we built in the ability to record once, run many. What I mean by that is, you could record a test case on, say, an iPhone XS on some particular version of iOS, and replay it on another iPhone XS running a potentially different platform version, or on an iPhone 11, 11 Pro, 12 Pro and so on. This helps with the maintenance cost, and it allows you to focus on depth of testing as well as breadth at the same time.
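Expressed in scripted terms, record once, run many looks like one test body parameterized over a device matrix. A minimal pytest sketch; the capability values are illustrative and make_driver is a hypothetical stand-in for Appium session creation, not an ACA API:

```python
# Run the same recorded flow across a matrix of device capabilities.
import pytest

DEVICES = [
    {"platformName": "iOS", "appium:deviceName": "iPhone XS", "appium:platformVersion": "15.5"},
    {"platformName": "iOS", "appium:deviceName": "iPhone 11", "appium:platformVersion": "16.2"},
    {"platformName": "iOS", "appium:deviceName": "iPhone 12 Pro", "appium:platformVersion": "17.0"},
]

@pytest.mark.parametrize("caps", DEVICES, ids=lambda c: c["appium:deviceName"])
def test_login_flow(caps):
    driver = make_driver(caps)  # hypothetical helper that opens an Appium session
    try:
        perform_login(driver, "test_user", "pw")  # hypothetical shared login step
    finally:
        driver.quit()
```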

Lastly, while we were building ACA, we integrated with a few different device lab providers. This is important both to us and to our customers, because it gives you the diversity of devices I mentioned previously. Maybe your app has a different target audience; maybe 90% of your users, for whatever reason, are all on the iPhone 8. Well, you're going to want to find a device lab that has those devices for you to test on. Certainly, you could decide to bring that expertise in-house. But I can say, from talking with each of these device lab vendors, again, some very well-known ones, that the cost to maintain something like this is very painful. It's also costly from a capital perspective to get off the ground.
