Abstract: This whitepaper aims to equip readers with the bare essentials of performance testing. It is a 101-style paper and does not cover the project management or technology aspects that also play a key role in testing applications for their performance. If you are already well aware of the need to incorporate explicit performance test cycles in your projects, this paper is not for you; otherwise, please read on!
Performance is an essential feature:
In today’s challenging business climate, organizations are focused on getting maximum business value from their products. Quality is the mindset, and performance is a quality attribute. People do not like to wait: if a bus is late, the person waiting looks for an alternative; if an order at a restaurant is served late, or a bike ride is slow, we hate it. Based on this human psychology, the IT industry focuses on the performance of its products. Performance is a “must have” feature. No matter how functionally rich a product is, if it fails to meet the performance requirements of the customer, the expectations of its users, and the constraints of the system, it is branded a failure in the market.
Unfortunately, developers often do not take the time to structure their applications for good performance, and quite often performance testing is done only at the end of the project. Architectural design decisions are influenced by the performance requirements specified by the customer, so software that “performs” has to be tested for performance at every stage of the software development life cycle (SDLC). Most performance issues can be tackled in the design phase itself by concentrating on workload models. It is certainly true that simulating unrealistic workload models can provide valuable information to the performance testing team, but accurate predictions and effective performance optimizations are possible only when realistic workload models are simulated. In functional testing, when we find a problem we can easily judge how serious it is; that is not the case in performance testing, where we usually have no idea what caused the problem or how serious it is.
Performance testing is done for these reasons:
1. To ensure that the system will meet the current and short-term projected needs of the business, and to establish how much performance can be extracted from the system as it exists today.
2. To plan for the point at which something must be done to support a greater load, and to verify the scalability of the product. This may include rewriting portions of the solution, restructuring the solution, or adding more hardware.
3. To identify system bottlenecks. This is particularly important in high-usage applications.
4. To determine the optimal hardware and software configuration for the product.
5. To estimate the various performance characteristics that end users are likely to encounter when using the application.
Performance testing is most frequently conducted to determine whether or not an application will do what it is intended to do acceptably under real-world conditions. It also identifies existing or potential functional errors that are not detectable in single-user scenarios but can or will manifest in multi-user scenarios. There is no standard that obliges a website to maintain a particular maximum response time, which makes performance testing even more complex.
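To make the multi-user point concrete, the sketch below fires concurrent requests at an application and counts failures. It is a minimal illustration under stated assumptions, not a recommended tool: the URL, user count, and request count are invented, and the third-party requests package is an assumed dependency.

```python
from concurrent.futures import ThreadPoolExecutor
import requests   # assumed third-party dependency

TARGET_URL = "http://localhost:8080/app"   # hypothetical endpoint (assumption)
VIRTUAL_USERS = 50                         # illustrative, not prescriptive
REQUESTS_PER_USER = 10

def virtual_user(user_id):
    """One simulated user: issue a series of requests, count failures."""
    failures = 0
    for _ in range(REQUESTS_PER_USER):
        try:
            resp = requests.get(TARGET_URL, timeout=30)
            if resp.status_code != 200:
                failures += 1
        except requests.RequestException:
            failures += 1
    return failures

# Run all virtual users concurrently; errors that never appear with a
# single user (contention, connection-pool exhaustion, races) may surface here.
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

print(f"total failed requests: {sum(results)}")
```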
Constituents of performance:
Performance is not a single attribute but a combination of characteristics: speed, stability, scalability, and reliability. A product that is functionally perfect (or at least has few visible errors) is still branded a failure when it is unreliable, because reliability is a key consideration in the world of business, and poor performance leads directly to loss of business.
Dimensions and measurements in performance testing:
A single-user load test is performed to establish the application’s baseline performance. If the application performs poorly or breaks at the single-user load level, there is no point in continuing to test it at higher load levels. Response time and CPU utilization are at their lowest for a single user, and response time increases as the user load increases. Once CPU utilization on the application server approaches and hits the 100% mark, any increase in the number of users only results in poorer response times. This point should be measured and recorded, as it clearly defines a limitation of the application. It also assists in capacity planning and in determining the number of application clones that will be required within a clustered environment to support the expected load. Some applications perform better at higher user loads than others; poorer response times and higher CPU utilization show that an application is suffering from bottlenecks.
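As a minimal sketch of collecting such a baseline, the snippet below times a series of single-user requests and samples CPU utilization. The endpoint and sample count are placeholders, the requests and psutil packages are assumed dependencies, and psutil here measures the test machine’s CPU; in a real test the application server’s CPU is what must be monitored.

```python
import statistics
import time

import psutil     # assumed third-party dependency
import requests   # assumed third-party dependency

TARGET_URL = "http://localhost:8080/app"   # hypothetical endpoint (assumption)
SAMPLES = 20                               # illustrative sample count

response_times = []
cpu_samples = []

for _ in range(SAMPLES):
    psutil.cpu_percent(interval=None)          # reset the CPU counter
    start = time.perf_counter()
    resp = requests.get(TARGET_URL, timeout=30)
    response_times.append(time.perf_counter() - start)
    # CPU percent since the reset above; note this is the *test* machine,
    # not the application server, so treat it as illustrative only.
    cpu_samples.append(psutil.cpu_percent(interval=None))
    resp.raise_for_status()   # a failing baseline means stop and fix first

print(f"median response time: {statistics.median(response_times):.3f}s")
print(f"mean CPU utilization: {statistics.mean(cpu_samples):.1f}%")
```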
To better simulate the real world, it is not enough to estimate the size of the load; a profile of the users and activities that make up the load must also be created, with the right mix. A load profile covers user activities, think times, usage patterns, client platforms, client preferences, client internet access speeds, background noise, and user geographic locations.
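A workload mix with think times can be expressed quite simply. The sketch below is illustrative only: the activity names, weights, and think-time ranges are invented, and in practice they would be derived from production logs or business projections.

```python
import random
import time

# Each activity: (relative weight, (min, max) think time in seconds).
# Names, weights, and ranges are invented for illustration.
WORKLOAD_MIX = {
    "browse_catalog": (0.60, (2, 8)),
    "search":         (0.25, (1, 5)),
    "checkout":       (0.10, (5, 15)),
    "admin_report":   (0.05, (10, 30)),
}

def next_action():
    """Pick the next user action according to the weighted mix."""
    names = list(WORKLOAD_MIX)
    weights = [WORKLOAD_MIX[name][0] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

def think(action):
    """Pause for a randomized think time, as a real user would."""
    low, high = WORKLOAD_MIX[action][1]
    time.sleep(random.uniform(low, high))

# One simulated user session of ten actions:
for _ in range(10):
    action = next_action()
    print("performing:", action)
    think(action)
```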
Key measurements from the end-user perspective:
Users do not care what throughput, response time, bandwidth, or hits-per-second figures prove or do not prove; they care only about a positive experience, and anything less annoys them. If one site takes 8 minutes to respond and another takes 3, users will flock to the product that responds faster. Speed is the key measurement from the end-user perspective.
The second measurement is the availability and accuracy of the response to a request made by the user, for example when downloading a file or a document.
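Availability and accuracy can both be checked mechanically. In the sketch below, the URL and expected checksum are placeholders and the requests package is an assumed dependency; a successful status code stands in for availability, and a matching content checksum stands in for accuracy.

```python
import hashlib

import requests   # assumed third-party dependency

FILE_URL = "http://localhost:8080/reports/summary.pdf"   # placeholder URL
EXPECTED_SHA256 = "<known-good checksum goes here>"       # placeholder value

resp = requests.get(FILE_URL, timeout=60)

# Availability: did the server answer the request successfully?
available = resp.status_code == 200

# Accuracy: is the downloaded content byte-for-byte what was expected?
digest = hashlib.sha256(resp.content).hexdigest()
accurate = digest == EXPECTED_SHA256

print(f"available: {available}, accurate: {accurate}")
```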