A wise man once said, “Only those who do nothing make no mistakes.” That is why we devote this post to the mistakes often made in load and performance testing projects. It was written not only for managers who are about to start their first testing project, but also for lead engineers who have been through many of them and have plenty of stories of their own to tell.
The focus of this text is performance testing, although mistakes can obviously occur in any part of a complex IT project.
Want to avoid performance testing mistakes?
Just hand it over to us. There is no better place for a QA solution than PFLB.
Most often, the belief that performance testing is easy and cheap leads to underestimating the project’s technical complexity, which, in turn, triggers a whole cascade of other wasteful decisions.
This belief comes from the fact that many customers do not really understand what performance testing is and confuse it with manual functional testing. It may stem from developers’ inability to assess the labour costs of performance testing, or from their unwillingness to deal with it. Perhaps the value of performance testing is simply not obvious to the business, which sees skipping it as an opportunity to save some money.
The first issue that stems from this belief is underestimation of the project’s complexity. The engineer assigned to the project may lack the necessary skills, and sometimes those skills are practically impossible to assess, even with the most detailed CV. Many modern IT systems cannot be tested by a single person, especially if the system keeps growing and evolving. Performance testing is a major milestone, and it requires a team. There are many ways to headhunt and build a team, but few clients think about this when initiating the project. Well, they should.
Where should you look for a specialist who is an expert on your particular, unique system? The short answer: they don’t exist. Yet. But if you do start performance testing, such an expert will emerge. It does not matter whether this person is on staff or outsourced: after a couple of months of involvement with your system, they will know enough to quickly localize bottlenecks and give advice on moving forward.
Another consequence is a lack of funds. If one believes that performance testing costs as much as manual functional testing, or that it is worth 5 to 10% of development, the testing process will be very limited, or nonexistent altogether. No systematic analysis of test results or defects is possible in such circumstances. Testing models and load profiles will lack quality, too. In many situations, the results will be useless.
What you should do instead is plan your performance testing budget, and resist cutting it down. It is better to run performance tests rarely than not at all. You may need to spend up to 50% of the development and testing budget on performance improvements. The business has to accept that high quality is expensive: no one needs a system that can do a lot on paper but does nothing under load. Calculating the financial losses caused by a system performance failure may help make the case.
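As a rough illustration of that last point, here is a minimal sketch of a downtime cost estimate. All figures and the function itself are hypothetical examples of ours, not a PFLB tool or a real pricing model:

```python
def downtime_cost(revenue_per_hour: float, hours_down: float,
                  recovery_cost: float = 0.0) -> float:
    """Rough estimate of direct losses from a performance outage.

    A real model should also account for reputational damage and
    customer churn, which are much harder to quantify.
    """
    return revenue_per_hour * hours_down + recovery_cost

# Hypothetical example: an online store earning $12,000/hour,
# down for 3 hours during a sale, plus $5,000 of engineering time.
loss = downtime_cost(12_000, 3, recovery_cost=5_000)
print(loss)  # 41000
```

Even this back-of-the-envelope number is often larger than the performance testing budget being argued over, which is exactly the point worth showing to the business.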
Don’t want to run tests by yourself?
There is no better place for a QA solution than PFLB.
Drop us a line to find out what our team can do for you.
The tests themselves can indeed take a couple of hours. But getting a meaningful result from those two hours requires two months of planning and preparation. If you are not prepared, the results, once again, will be useless. Of course, one can try working 14 hours a day without days off, but in such conditions the team burns out very quickly.
Unrealistic deadlines yield unrealistic test results. This is highly likely when the date was picked out of thin air instead of being tied to the project lifecycle. The words “we have two weeks to test it” mean nothing: a random deadline brings random results.
Sensible businesses test their apps’ performance all the time: testing is as much a routine as development and optimization are. Milestones and deadlines are not out of the question, but they should be estimated based on previous experience.
A sensible solution to such problems is to scale the testing profile gradually and move forward in iterations. To begin with, test the three main business processes. Then test five, 10, 20, 100, and so on. For each iteration, compare test results to production figures, monitor the process, and optimize it. If possible, automate the preparation of the testing environment, as well as test reporting and its interpretation. To save time and money, you can use geographically dispersed teams: when your developers in one part of the world deploy a release and go home to rest, the testers in another part of the world start testing it immediately, and by the next morning the developers have their reports.
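The iterative scaling above can be sketched as a simple schedule generator. This is our own illustration, assuming a roughly doubling scope per iteration; the function name and growth factor are not taken from any real tool:

```python
def ramp_stages(start: int, target: int, growth: float = 2.0) -> list[int]:
    """Build an iterative scaling plan: each test run covers roughly
    `growth` times more business processes (or virtual users) than
    the previous one, until the target scope is reached."""
    stages, current = [], start
    while current < target:
        stages.append(current)
        # Grow by the factor, but always advance by at least 1.
        current = max(current + 1, round(current * growth))
    stages.append(target)
    return stages

# Hypothetical plan: start with 3 key business processes, grow to 100.
print(ramp_stages(3, 100))  # [3, 6, 12, 24, 48, 96, 100]
```

The exact numbers matter less than the discipline: each stage gets its own run, its own comparison against production, and its own round of optimization before the scope grows.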
Well, there is no way to do performance testing without an environment. So either forget about it for good, or just do it: assemble your testing environment. Even a testing environment that does not match production capacity is better than none at all. Of course, you should aim for 100% of production, but 80, 70, or even 60% would do.
Another option is to run tests in production, but this is truly painful and has to be approached gradually and cautiously.
As a temporary solution, you can partition your production environment to carve out some space and run component performance tests there; microservices are fashionable nowadays, anyway. Alternatively, move your testing environment to the cloud, where you can scale it whenever you need and rent capacity by the hour, though this decision should be made by your system architect, if you have one on staff. And if you happen to be planning a hardware upgrade, you can hand the old machines over to the testing team.
Anyway, your performance testing contractor should offer a solution based on your goals and capacities.
Looking for a performance and load testing provider?
Drop us a line. Most likely, we have already dealt with problems like yours in the previous 300 projects.
You may ask why a separate team is required for performance testing. After all, the developers built your system, so they might as well test it. In reality, this only works in fairy tales. Developers are always more driven by the urge to ship new features than to test. And if they are building the system for a customer, the first thing they want to cut is performance testing services.
If performance testing has never been performed, any problems that arise hurt the system owner and their customers. Internal developers may claim the problem is in the hardware and hide behind formal results. Pinpointing performance as the source of a problem without running tests for every release is close to impossible.
To avoid such circumstances, employ an independent, competent and involved testing team that you trust.
The thing is, developers and those who accept and test software should not depend on each other or be affiliated in any way. Keeping them under the same roof increases the chances of mutual pressure. If, on the contrary, an independent contractor tests the system and the report is somewhat critical, it can serve as an effective argument to make developers address performance problems. Independence is one of the key benefits of outsourcing performance testing.
It makes sense, but performance testing is all about reducing risk. It is never a 100% guarantee, yet it does decrease the probability of shooting yourself in the foot. Without performance testing, there is always a chance of performance degradation in production, and you won’t be able to reverse the changes that caused it.
Moreover, any IT system that works well tends to grow with time. New features and functions accumulate as quickly as barnacles on a ship’s hull, and data volumes grow, too. So the fact that your system works properly at the moment does not mean that in a couple of years this ship will be as fast as it is now. Putting off performance testing is a big mistake: when the ship is sinking, it will be too late. Instead, performance testing should be regarded as an obligatory stage of your system’s life cycle. Include it in your rules and regulations, if you can. And remember: acceptance testing is a must.
This article has not covered the many minor issues that can emerge in any performance testing project. Experienced performance testing engineers would agree that choosing the right tool is always a challenge, and that ignoring production data leads to surprises.
But the main idea here is that the worst problems lie in the organizational field, not the technical one. If the customer and the contractor both understand the crucial importance of performance testing, and everyone on the team does their best, your systems will run smoothly and flawlessly, your clients will thank you, and your business will not only trust IT but also rely on it and grow. As a performance testing company with 400+ engineers on staff and 300+ successful projects, we know what we are talking about.