This post is from the Experitest blog and has not been updated since the original publish date.
The Present and Future of Artificial Intelligence in QA
We believe that, like cloud computing a decade ago, artificial intelligence has the potential to transform entire industries and solve some of our biggest challenges. In this blog post, we are going to talk about AI in the context of QA and testing. We will also address why the subject is at the forefront of tech in 2020 and illustrate some use cases where AI is successful.
Next, we will map the key issues and challenges for QA and testing so we can see where AI can be successfully applied. We will also offer practical tools for evaluating the available AI solutions in QA and test automation.
What is Artificial Intelligence in QA?
Artificial intelligence is defined as computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages. Simply put, AI performs tasks traditionally associated with what only humans can do.
Another definition is that AI is a system's ability to correctly interpret external data, to learn from that data, and to use those learnings to achieve specific goals and tasks through flexible adaptation. Again, we are talking about skills that we traditionally associate with humans, not with computers.
There is one more definition that we see bandied about on the internet. It describes AI as a system that is able to make assumptions, test them, and learn autonomously. Autonomous learning, of course, is something else that we associate with humans more than machines.
One of the major fields within artificial intelligence is machine learning, and within that are many different types of learning. One we have been hearing a lot about is deep learning, which uses deep artificial neural networks that try to mimic how the brain works.
Looking at the image above, you can see that there are three basic machine learning paradigms: supervised, unsupervised, and reinforcement learning. Let's break these down one by one.
Supervised learning - Pre-labelled data, with inputs and outputs, that we use to train AI models. For example, if you want to train an AI to learn which of several pictures is a cat, you have to create a labeled dataset that says which images are cats and which are not. It might take millions of images, but once the model starts to make sense of them, it will be able to tell the difference between an image that is a cat and one that is not.
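The labeled-data idea above can be sketched with a toy classifier. This is a minimal illustration, not a real vision model: the two numeric "features" per example are hypothetical stand-ins for real image features, and the classifier simply averages each label's examples into a centroid.

```python
# A minimal sketch of supervised learning: a nearest-centroid classifier
# trained on a tiny pre-labelled toy dataset. The feature values are
# made up for illustration; real models learn from millions of images.

def train(examples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Pre-labelled training data: (features, label) pairs.
training = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
    ([0.1, 0.2], "not-cat"), ([0.2, 0.1], "not-cat"),
]
model = train(training)
print(predict(model, [0.85, 0.75]))  # prints "cat"
```

The point is the workflow: a human supplies the labels up front, and the model only generalizes from those labeled examples.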
Unsupervised learning - This is used for classification, including image classification, but also for other tasks. With unsupervised learning we don't need labeled data, which can be a big advantage for use cases where we simply don't have the ability to label so many pieces of data. An unsupervised model self-organizes and predicts new outcomes. It is usually used in clustering, for which you need a lot of data, and it lets you see what in the data is noise and what forms a cluster. Examples of where this is used include the segmentation of markets and customers; unsupervised learning is also often used for spam filtering.
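The clustering idea can be sketched with a tiny one-dimensional k-means run. The customer spend figures below are hypothetical; the point is that no labels are given, yet the algorithm organizes the data into two segments by itself.

```python
# A minimal sketch of unsupervised learning: 1-D k-means clustering of
# (hypothetical) customer spend figures into two segments, with no labels.

def kmeans_1d(values, k=2, iterations=20):
    # Initialize the two centers at the min and max values.
    centers = [min(values), max(values)]
    for _ in range(iterations):
        # Assignment step: each value joins its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

spend = [12, 15, 14, 11, 250, 240, 260, 255]   # two obvious segments
centers, clusters = kmeans_1d(spend)
print(sorted(round(c) for c in centers))  # prints [13, 251]
```

Real market-segmentation work runs the same loop over many features and far more data, but the self-organizing principle is identical.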
Reinforcement learning - Based on a reward system. To build your algorithm, you need to give it feedback on whether it did something right or wrong. Reinforcement learning requires a large amount of feedback, but once it learns to choose the correct outcomes it can make recommendations, as in trading, for example.
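The reward-feedback loop can be sketched with an epsilon-greedy bandit, a simple reinforcement learning setup. The two actions and their reward probabilities are invented for illustration; the agent only ever sees the 0/1 reward signal, not the hidden probabilities.

```python
# A minimal sketch of reinforcement learning: an epsilon-greedy agent
# learning which of two actions yields the higher average reward.
# The hidden reward probabilities below are made up for illustration.

import random

random.seed(42)

true_reward_prob = {"A": 0.3, "B": 0.8}   # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}          # the agent's value estimates
counts = {"A": 0, "B": 0}
epsilon = 0.1                             # exploration rate

for step in range(2000):
    # Explore occasionally; otherwise exploit the best current estimate.
    if random.random() < epsilon:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    # Environment feedback: reward 1 with the hidden probability, else 0.
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    # Incremental mean update of the chosen action's value estimate.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(estimates, key=estimates.get)
print(best)  # the agent learns to prefer "B"
```

This mirrors the description above: lots of trial-and-error feedback at first, then increasingly confident choices of the higher-reward option.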
ANI and AGI
When we talk about artificial intelligence as an industry, whether on the news or in some other form of media, we are actually referring to two separate ideas that get combined: one is artificial narrow intelligence and the other is artificial general intelligence.
Let's take a look at some of the similarities and differences.
Artificial Narrow Intelligence (ANI) is basically a one-trick pony. It can learn to do a specific task like image classification, and do it well. When you try to apply it to another task it is useless.
Artificial General Intelligence (AGI) is about creating an AI that can do tasks that a human can and even perform them better than a human. It is super intelligent and the source of many of the Science Fiction fears of robots taking over the world.
When we look at the breakthroughs of recent years, almost all of them are related to ANI. The future we envision when it comes to AGI, which is what many people assume all AI to be, is tens if not hundreds of years away. That makes ANI reality and AGI mostly hype.
When we talk about AI in the context of QA and testing, of course, we're talking about artificial narrow intelligence. This is very important, because you have to define your use case very well so that the AI can solve it. You cannot simply turn to your AI and say "QA is very important, please fix the problem." It won't work that way.
An example of the impact of research
Let's take the example of facial recognition. Facebook researchers took 4 million user-uploaded, labeled images and used them to train their artificial intelligence model to recognize faces, in order to tell whether two pictures were of the same person or not. The AI was able to perform this task much faster than a human, with a similar and sometimes better error rate. When you take into account that this was in 2015, that's a pretty mind-blowing result.
The reason this is so important is that as long as your AI is not as good as a human at a task, you cannot replace the human. Once your AI is as good as a human, you can.
Commercial Use Cases
There are many commercial use cases for facial recognition.
- Smarter more targeted advertising in social media
- Visually aiding the blind
- Finding missing people
- Preventing crime
- Recognizing VIPs at events
- ATM authentication
These use cases can produce billions of dollars in value for companies. According to McKinsey, by 2030 AI will account for some $13 trillion worth of business, which is an incredible number.
Why AI, why now?
Machine learning has existed for more than 15 years. So what has happened in the last five years to drive AI so far forward?
Below you will see a slide by Andrew Ng of deeplearning.ai. The x-axis shows the amount of data and the y-axis the performance achieved. With traditional AI and only a little data, we could reach just a modest level of results.
When we compare that to human intelligence, we see a gap, because human performance is well beyond what traditional AI can achieve.
Now, if we have more data, thanks to the cloud and big data, and perhaps also implement deep learning, the outcome is better, but still below human capability. What has happened over the years is that processing power has increased and more computation has become available, not only to cloud companies but also to smaller labs and start-ups. They started feeding more and more data into larger networks, until we ended up with the level of performance that we have today.
AI Use cases
We have a simple rule of thumb that we can use to determine the best use cases for AI and machine learning.
Machine learning tends to work well when you are trying to learn simple concepts, roughly anything a person could do with less than a second of mental thought, and when there is lots of data available. This rule of thumb comes from Andrew Ng, whom we mentioned above.
If we are talking about recognizing a picture of a cat, or telling whether the same person's face appears in two different photos, then yes, these are perfect cases. However, if you want to train an AI to answer emails, and those emails require a response with a touch of empathy, then that would not be a great case for automation.
The functional landscape of AI for QA organizations
There are several main issues and challenges in QA and testing where AI can be applied. You can see a sample of what we mean below.
- Executing Exploratory Testing - An application crawler navigates through your application and explores its boundaries, allowing you to map all the screens and, within each screen, all the controls. Possible AI solutions can identify different anomalies within the application, like page load time and UI changes, as well as behavioral changes in the functionality of the application.
- Test Result Analysis - It is important to be able to group and categorize test results; this reduces the time spent on test failure analysis. Categorize test results into buckets like environment error, flaky test, UI change, and assertion failure. In many cases, the result analysis effort shrinks by a factor of 10 when you categorize and prioritize tests. This is mainly relevant to organizations with mature continuous testing practices that execute more than 10K tests on a daily basis.
- Test Execution Optimization - Gain insight into which tests you should execute based on the code changes you made. Identify the risk level of releasing changes to production given the level of testing you performed, and reduce the number of tests you need to execute to reach a given risk level. This is especially valuable when executing a test against the solution under test is expensive.
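The test-result bucketing idea above can be sketched very simply. Commercial tools use ML models trained on failure history; the keyword rules and bucket names below are purely illustrative stand-ins for that learned categorization.

```python
# A minimal sketch of test-result bucketing: grouping raw failure
# messages into triage categories. Real AI tools learn these categories
# from data; the keyword rules here are illustrative only.

RULES = [
    ("Environment error", ["connection refused", "timeout", "dns"]),
    ("UI change",         ["element not found", "locator", "selector"]),
    ("Assertion",         ["assertionerror", "expected", "actual"]),
]

def categorize(message):
    """Return the first bucket whose keywords match the failure text."""
    text = message.lower()
    for bucket, keywords in RULES:
        if any(k in text for k in keywords):
            return bucket
    return "Flaky / unknown"

failures = [
    "Connection refused while reaching staging server",
    "AssertionError: expected 200, got 500",
    "Element not found: #checkout-button",
]
for msg in failures:
    print(categorize(msg))   # one bucket per failure
```

Even this crude version shows why bucketing shrinks triage effort: instead of reading thousands of raw failures, an engineer reviews a handful of buckets sorted by size.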
Artificial Intelligence in QA - Recap
AI is a growing tool in the QA and functional landscape, it has huge potential for QA and testing, and it matters what your organization does about it. We also discussed instances where it might be less than the ideal solution. In our webinar on the same subject, which you can watch here, you will see how we mapped the key functional areas where artificial intelligence in QA can bring a lot of value today, as well as the new functional areas that are going to become increasingly available in the years to come.