Thursday, September 13, 2007

Indispensable aspects of performance testing

Abstract: This whitepaper aims to equip readers with the bare essentials of performance testing. It is a 101-style paper and does not cover the project management or technology aspects that also play a key role in testing applications for their performance. If you are already well aware of the need to incorporate explicit performance test cycles in your projects, this paper is not for you; otherwise, please read on!

Performance is an essential feature:

In today's challenging business climate, organizations focus on getting maximum business value from their products. Quality is the mindset, and performance is a quality attribute. Human beings do not like to wait: we want everything to be fast. A person waiting at a bus stop looks for an alternative if the bus is late; we are annoyed when an order at a restaurant is served late or when a ride on a bike is slow. Based on this psychology, the IT industry focuses on the performance of its products. Performance is a "must have" feature. No matter how rich a product is functionally, if it fails to meet the performance requirements of the customer, the expectations of users and the constraints of the system, it is branded a failure in the market.
Unfortunately, developers often do not take the time to structure their applications for great performance, and quite often performance testing is done only at the end of the project. Architectural design decisions are influenced by the performance requirements specified by the customer, so software that "performs" has to be tested for performance in all stages of the software development life cycle. Most performance issues can be tackled in the design phase of the SDLC itself by concentrating on workload models at that stage. It is certainly true that simulating unrealistic workload models can provide some information to the performance testing team, but accurate predictions and effective performance optimizations are possible only when realistic workload models are simulated. In functional testing, if we find a problem we can easily figure out how serious it is; that is not the case in performance testing, where we often have no idea what caused the problem or how serious it is.

Performance testing is done for these reasons:
1. To ensure that the system will meet the current and short-term projected needs of the business, and to establish how much performance can be extracted from the system as it exists today.
2. To plan for when something must be done in order to support a greater load, and to verify the scalability of the product. This may include rewriting portions of the solution, restructuring the solution, or adding more hardware.
3. To identify the system bottlenecks. This is particularly important in high-usage applications.
4. To determine the optimal hardware and software configuration for the product.
5. To estimate the various performance characteristics that end users are likely to encounter when using the application.

Performance testing is most frequently conducted to determine whether or not an application will do what it is intended to do acceptably in the real world. It also identifies existing or potential functional errors that are not detectable in single-user scenarios but can or will manifest under multi-user scenarios. There is no standard that obliges a website to maintain a particular minimum response time, which makes performance testing even more complex.

Constituents of performance:

Performance is not a single attribute but a cohesion of characteristics such as speed, stability, scalability and reliability. Falling short on any of these may lead to loss of business: even a product that is functionally near-perfect (with few visible errors) is branded a failure when it is not reliable, because reliability is a key consideration in the world of business.

Dimensions and measurements in performance testing:

The single-user load test is performed to establish the application's baseline performance. If the application performs poorly or breaks at the single-user load level, there is no point in continuing to test it at higher load levels. Response time and CPU utilization are low for a single user, and response time increases as the user load increases. Once CPU utilization on the application server approaches and hits the 100% mark, any increase in the number of users only results in poorer response times. This point should be measured and recorded, as it clearly defines a limitation of the application. It also assists in capacity planning and in determining the number of application clones that will be required within a clustered environment to support load expectations. Some applications perform better at higher user loads than others; poorer response times and higher CPU utilization indicate that an application is suffering from bottlenecks.
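
To make the baseline concrete, here is a minimal single-user measurement sketch in Python; the URL, the sample count and the use of the requests library are assumptions made for illustration, not part of any particular tool:

    # Single-user baseline sketch (hypothetical URL and sample count).
    import time
    import requests

    URL = "http://example.com/app/home"   # assumed application URL
    SAMPLES = 20                          # assumed number of baseline samples

    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        response = requests.get(URL)      # one user, one request at a time
        timings.append(time.perf_counter() - start)
        print("status", response.status_code, "response time %.3fs" % timings[-1])

    print("baseline average: %.3fs" % (sum(timings) / len(timings)))
    print("baseline worst case: %.3fs" % max(timings))

The same numbers, recorded again at higher user loads, show where response times start to degrade.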

To better simulate the real world, not only should the load size be estimated, but a profile of the users and activities that make up the load should also be created, so that the right mix is generated. Load profiles can differ in user activities, think times, usage patterns, client platforms, client preferences, client internet access speeds, background noise and user geographic locations.
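
One simple way to express such a mix is as a weighted profile. The sketch below is illustrative only; the activities, proportions and think times are assumptions, not measured values:

    # Hypothetical workload mix: activity -> (share of virtual users, think time in seconds)
    import random

    WORKLOAD_MIX = {
        "browse_catalogue": (0.60, 5),
        "search":           (0.25, 3),
        "checkout":         (0.15, 10),
    }

    def pick_activity():
        """Choose the next activity according to the weighted mix."""
        activities = list(WORKLOAD_MIX)
        weights = [WORKLOAD_MIX[name][0] for name in activities]
        return random.choices(activities, weights=weights, k=1)[0]

    print(pick_activity())

A load tool driven by such a profile generates traffic that is closer to what real users produce than a uniform stream of identical requests.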


Key measurements from the end-user perspective:

Users do not care what the throughput, response time, bandwidth or hits per second prove or do not prove; they only care about a positive user experience, and otherwise they are annoyed. If one site takes 8 minutes and another takes 3 minutes, users will definitely flock to the product that responds in less time, i.e. performs better. Speed is the key measurement from the end-user perspective.
The second measurement is the availability and accuracy of the response to a request made by the user, for example when downloading a file or a document.

Sunday, June 3, 2007

Exploratory testing

"He is a skilled tester, hence he might be a good exploratory tester."

Exploratory software testing is a powerful and fun approach to testing. The plainest definition of exploratory testing is concurrent learning, design, execution and recording. Exploratory tests, unlike scripted tests, are not defined in advance and carried out precisely according to plan.

Exploratory testing is known as informal testing, but it is a formally informal method: analyze the outputs and work out which inputs could produce a particular output. There are situations where exploratory testing is done without a requirements document; instead, test cases are written while exploring and later transformed into a requirements document.

Exploratory testing is an approach that is especially suitable when requirements and specifications are incomplete, or when there is a lack of time. This does not mean it cannot be used otherwise: it complements or augments other, more formal, testing. The main advantages of exploratory testing are that less preparation is needed and that it quickly finds serious bugs that are not surfaced by the formal test cases. It is also known as "on the fly" testing.

The method can serve as a check on the test process, to help ensure that the most serious defects are found. It is common to perform a combination of exploratory and scripted testing where the choice is based on risk.

An example of exploratory testing in practice is Microsoft's verification of Windows compatibility.

Drawbacks:

1. The tests cannot be reviewed in advance (and thereby prevent errors in code and test cases), and it can be difficult to show exactly which tests have been run.

2. When exploratory tests are repeated, they will not be performed in exactly the same manner, which can be an advantage if it is important to find new errors, or a disadvantage if it is more important to know that exact things still work.

3. Exploratory tests are not suitable candidates for automation.

All exploratory testing is based on the knowledge of the tester.

The knowledge that the exploratory tester already has varies from person to person, just as with any sort of knowledge. For example, some testers have a great deal of familiarity with the application or type of application they are testing, while others base their tests on a sense of risk and their knowledge of how products can fail.

Bug taxonomies help new and inexperienced testers do exploratory testing efficiently.

Sources:

James Bach – satisfice.com
Ashwin Palaparthi
Cem Kaner

Contents of a good bug report

"The best tester isn't the one who finds the most bugs or who embarrasses the most programmers. The best tester is the one who gets the most bugs fixed." -- Cem Kaner

A problem report is used to communicate effectively with the programmer. If reports are not clear and understandable, bugs will not get fixed. The point of writing a problem report is to get bugs fixed. As soon as you run into a problem in the software, fill out a problem report form.

Contents of a good bug report:

  • Programmers dismiss reports of problems that they cannot see for themselves, so the steps to reproduce the bug must be explained.
  • The bug should be analyzed so that it can be described in the minimum number of steps. If the steps are lengthy, the developer might postpone fixing the bug.
  • A good bug report should contain diagnostic information such as log files, memory dumps, and screenshots of the relevant portions of the GUI (for example, to show an inappropriate error message).
  • If there is any relationship with other bugs, it must be mentioned.
  • Context information must be provided.
  • The consequences of the bug and its impact must be stated clearly, so that the bug is fixed as early as possible.
  • A capture of the visual glitch or a video attachment adds value to your bug report.
  • If the report is confusing, it irritates the developer and does not motivate him/her to fix the bug.

A problem report must be written separately for each bug, so the number of problem reports is equal to the number of bugs found.

I have explored an open-source project called Akelpad (a desktop application) and have prepared problem reports.

Akelpad: It is similar to Notepad but has some added features.

Problem Report Form 1
Program: Akelpad

Problem summary: The program does not let you retract changes made in the settings, such as the background color. Undo is desirable and essential.

Severity: Minor

Can you reproduce the problem: Yes

Problem: The Ctrl+Z/Undo function does not work for settings

How to reproduce it: Open the Akelpad application → change the settings (e.g. the background color) → try to undo the changes.

Reported by: Rojaramani.k Date: 02/28/07

Problem Report Form 2

Program: Akelpad

Problem summary: A shortcut key fails to perform the same function as the menu option. Users expect to be able to work with shortcut keys rather than the mouse.

Severity: Minor

Can you reproduce the problem: Yes

Problem: The shortcut key Ctrl+E for the read-only mode function does not work

How to reproduce it: Open a new file → enter some data → use the shortcut key to switch to read-only mode → it does not work, although it works through the menu bar.

Reported by: Rojaramani.k

Date: 02/28/07


Problem Report Form 3

Program: Akelpad

Problem summary: The command-line parameters, each of which should have a different functionality, all show only one behaviour, i.e. creating a new file.

Severity: Serious

Can you reproduce the problem: Yes

Problem: All the command-line parameters produce the same behaviour of creating a new file.

How to reproduce it: Run the application with each of the documented command-line parameters from the command prompt.

Reported by: Rojaramani.k

Date: 03/05/07


Sources:

Cem Kaner

Ashwin Palaparthi

sourceforge.net


Lessons learned from the open source bug reports

A study of the bug reports in open-source projects like FileZilla and Azureus drives home the fact that there are common areas where the maximum number of bugs is reported.

1. Localization is one of the important aspects to check, as time, date, currency and language formats vary across countries.

2. This kind of error usually happens when one routine passes an array to another and their definitions of the data stored in it do not match, for example an array declared to hold 22 characters when only 20 characters are accepted. (FileZilla)

3. In a client-server application, the clocks on the client and the server systems must always be synchronized. (FileZilla)

4. A user-friendly message must be shown when an action completes, or when it was supposed to complete and did not. If a file is uploaded to a location where there is no access, a user-friendly message must pop up; instead the file is displayed in the remote directory listing even though it was not uploaded. (FileZilla)

5. Security-related issues (like authorization and authentication) must be tested in different contexts. (File permissions that work on one server may not work on another.) (FileZilla)

6. While creating or copying files on a particular file system, check the boundaries of the maximum possible file size. (Azureus: FAT32 → 4 GB file limit)

7. Components that were developed and tested individually may work on their own but fail when they have to work together. Components must be tested in integration.

8. Newly added features or fixed versions can raise new bugs, including GUI errors, because the developer may write code only for the fix and not check how it affects the rest of the code.

9. When testing character formats, test with as many different characters from the character set as possible (special characters and non-printable characters).

10. When testing the functionality of the menu options, the shortcut keys must also be tested.

11. A user relies on one behavior that is inconsistent in other areas, causing frustration. That is an incident worth reporting. (Usability testing.)

12. In client/server applications, test the software by randomly dropping the connection to see whether the application resumes automatically or not.

13. A command-line switch does not prompt for a password and so fails to log in. This could be a control-flow error, which occurs when the program does the wrong thing next.

14. The application must be tested for registry entries and registry clean-up during installation. It must also be verified that the installer exits from error scenarios, such as insufficient disk space, with a user-friendly message.

Sources:
sourceforge.net
Cem Kaner


Negative testing

There are different Schools of thought:

“Any test input that should produce an error message, and indeed produces it, is negative testing.”

“The software doing anything that it is not supposed to do is negative testing.”

“Any test input or sequence that aims at crashing the application is negative testing.”

“Showing an error when it is not supposed to, and not showing an error when it is supposed to.”

“Showing that the software will fail and that the failure is handled in a specified manner.”

According to my knowledge, “there is nothing like negative testing, but there is something called a negative test case / test idea”.

Negative testing does not have a well-defined position in the SDLC and so is generally not seen as a distinct phase. Rather, it is a way of classifying some tests, and so of directing part of the test effort. Typically, negative testing is most often used during system and integration testing and is designed and performed by test professionals. Negative testing is an open-ended activity and an effective approach; it is also a hard-to-manage task that has the potential to produce unwelcome information.

Negative testing cannot be planned in any detailed way, but must be approached proactively. Negative testing concentrates on failure modes, observation of failures, assessment of the risk model, and finding new, unknown problems.

Negative testing is not a test design technique but an approach within which many formal test design techniques can be used. Negative test cases can be derived by using both black-box test techniques and experience-based (empirical) test techniques.

If a boundary lies between valid and invalid ranges, the test case that uses the invalid value is a negative test – for instance, using 66 in an age field that only accepts values from 18 to 65.

Each member of a given equivalence class should, in the context of a known test, make the system do the same thing, so the tester does not have to test every value in the class. Ranges of invalid input data can be treated as negative tests – for instance, an age field may be expected to reject all negative numbers in the same way.
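
As a sketch of how such negative cases look in practice, assume a hypothetical validate_age function that accepts only ages 18 to 65 (the function and its rules are invented for illustration):

    # Hypothetical validator used only to illustrate negative test cases.
    def validate_age(age):
        """Accept integer ages from 18 to 65 inclusive; reject everything else."""
        return isinstance(age, int) and 18 <= age <= 65

    # Boundary-value negative tests: values just outside the valid range.
    assert validate_age(17) is False
    assert validate_age(66) is False

    # Equivalence-class negative tests: all negative numbers should be rejected alike.
    for invalid_age in (-1, -20, -999):
        assert validate_age(invalid_age) is False

    # Valid boundaries, for contrast.
    assert validate_age(18) is True
    assert validate_age(65) is True
    print("all age-field checks passed")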

Given a state transition diagram, or an equivalent understanding, it is straightforward to derive test cases that explicitly examine whether unreachable states are indeed unreachable.

Most systems have explicit and implicit restrictions and constraints. Treating these constraints as requirements can lead to a variety of negative tests. For example: “No more than five users will use the system at the same time” – a negative test might try six, then eight.

Typically, these tests involve measurement and observation of the system’s behavior rather than direct tests against expectations. This is only to be expected when working outside the system’s operating parameters, and the observations can lead to an improved understanding of the system.

Failure mode and effects analysis is the basis for tests that observe the system’s behavior under conditions of failure. It is important to capture and document these observations, particularly if they allow troubleshooting of the data or environment.

Testing concurrent use of resources can be a very fruitful way to discover bugs. Initial analysis involves identifying the data, database entities, files, connections and hardware that more than one process may try to use simultaneously.

Testing a system or an application on the fly, i.e. randomly trying to break the functionality or crash the application, can include negative testing as well.

Testing a system or an application by testers who have common knowledge of where the product mostly fails can also serve as negative testing.

Advantages:
1. Negative testing is an effective approach to maximize the bug count, which is one of the objectives of testing.
2. By fixing the uncovered bugs, we build confidence in the quality of the system.
3. It shows the system’s response to all the crazy activities of users and ensures that the application does not crash.

Conclusions:


  • All systems will fail under some combination of volume or stress, and tests that seek to find the system’s limits do so by driving it to failure. The failure may not always be a system crash; it can also be a functional failure mode.
  • Negative testing takes a largely exploratory approach to get the best results, and it is a core skill of experienced testers.
  • Negative testing is done with an orientation to the future, anticipating problems and taking affirmative steps to deal with them rather than reacting after a situation has already occurred.
  • Negative testing is a powerful and effective approach to uncovering bugs, and the more bugs fixed, the higher the quality level of the software system.

Above all, the value of any practice depends on its context.

Sources:

google.com

testertested.blogspot.com

Ashwin Palaparthi

Regular Expressions

"Smart test engineer know and love “Regex”". --Ashwinpalaparthi

"Regular expressions" is eventually a popular search tool used by the programmers to find, validate, modify, or edit information. This concept of regular expressions can be used by the test engineers in the testing process for smart testing.

There are many instances where a tester has to find and manipulate text, for example large volumes of text in a requirements document, or thousands of test cases. This can be done manually using the “find” feature, but that has many pitfalls: only one string can be found at a time, you cannot get a count of the matches, and strings that follow a particular pattern, for example pin, pan and pun, cannot be searched for in one go. This consumes a lot of time, which in turn affects the productivity of the testing process.

A test engineer can complete the task more intelligently and increase productivity (completing a task in less time, effectively and efficiently, is being productive) by using a pattern language known as “regular expressions”.

A regular expression is a string that is used to describe or match a set of strings according to certain syntax rules. It is a pattern language; anything that has a regular form of occurrence is a pattern. Regular expressions are powerful wild-card text-processing tools: they give a concise description of a set without having to list all its elements. For example, the set containing the three strings Handel, Händel and Haendel can be described by the pattern “H(ä|ae?)ndel”, and the set containing the three strings pin, pan and pun can be described by the pattern “p.n”. Regular expression is often shortened to “regex” or “regexp”.

Regular expressions are a declarative notation that forms a powerful pattern language. They are supported by all major databases, scripting languages and programming languages; advanced editors support them, and numerous commercial and free libraries exist. A regex engine is a piece of software that processes regular expressions, trying to match the pattern against a given string. Spaces are also significant in patterns, and regexes are case sensitive by default.

Rules or syntax to be followed:

  • Any character matches itself as long as it is not a special character (a special character is escaped with “\”).
  • “.” (period) matches any single character. E.g. “a.c” means “a”, followed by any single character, followed by “c”.
  • “{m,n}” defines a repetition range. E.g. “ab{2,4}c” matches “a” followed by 2 to 4 “b”s, followed by “c”; “-” defines character ranges inside square brackets, e.g. “[0-9]”.
  • “^” (start) matches the start of the string (or of any line, when applied in multiline mode). E.g. “^abc” matches “abc” at the beginning of the string.
  • “$” (end) matches the end of the string (or of any line, when applied in multiline mode). E.g. “abc$” matches “abc” at the end of the string. “^” and “$” are known as anchoring characters.
  • “|” (either/or). E.g. “ab|bc” matches “ab” or “bc”. (See the short sketch after this list.)
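
A minimal sketch of these rules using Python’s re module (the sample strings are invented purely for illustration):

    import re

    print(bool(re.search(r"a.c", "abc")))            # True: "." matches any single character
    print(bool(re.fullmatch(r"ab{2,4}c", "abbbc")))  # True: 2 to 4 "b"s between "a" and "c"
    print(bool(re.search(r"^abc", "abcdef")))        # True: "^" anchors to the start
    print(bool(re.search(r"abc$", "xyzabc")))        # True: "$" anchors to the end
    print(bool(re.fullmatch(r"ab|bc", "bc")))        # True: "|" means either/or
    print(bool(re.search(r"p.n", "pin pan pun")))    # True: matches pin, pan or pun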

Scenarios where a test engineer can use “Regex”.

1. Test for a pattern within a string. For example, you can test an input string to see whether a telephone number pattern or a credit card number pattern occurs within it. This is called data validation.
2. Replace text. You can use a regular expression to identify specific text in a document and either remove it completely or replace it with other text.
3. Extract a substring from a string based upon a pattern match. You can find specific text within a document or input field. (A sketch of all three uses follows this list.)
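
A hedged sketch of these three uses in Python follows; the telephone-number format and the sample text are assumptions made up for the example:

    import re

    text = "Call 040-2345678 or mail test@example.com for support."

    # 1. Data validation: does a (hypothetical) phone-number pattern occur in the input?
    phone_pattern = re.compile(r"\d{3}-\d{7}")
    print(bool(phone_pattern.search(text)))   # True

    # 2. Replace text: mask every phone number in the document.
    print(phone_pattern.sub("[hidden]", text))

    # 3. Extract a substring: pull the e-mail address out of the string.
    match = re.search(r"[\w.]+@[\w.]+", text)
    if match:
        print(match.group())                  # test@example.com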

Regexes are greedy by default and try to match the maximum amount of text, which can be a significant problem. Modern regular expression tools, however, allow a quantifier to be specified as non-greedy by putting a question mark after it.
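
A small sketch of the difference in Python (the sample markup is invented for illustration):

    import re

    html = "<b>bold</b> and <i>italic</i>"

    # Greedy: ".*" grabs as much as possible, swallowing both tags.
    print(re.findall(r"<.*>", html))    # ['<b>bold</b> and <i>italic</i>']

    # Non-greedy: a "?" after the quantifier matches as little as possible.
    print(re.findall(r"<.*?>", html))   # ['<b>', '</b>', '<i>', '</i>']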

Test engineers are often reluctant to use regular expressions because they find the syntax difficult to write. But a smart test engineer learns to craft powerful, time-saving expressions and tests effectively and efficiently.

Sources:

Ashwin Palaparthi

wikipedia.com

External Links:


Regular Expressions info

String tools .com

Regular Expression library

Regex Studio



Characteristics of good test cases

“Good writing leads to good testing”

A test case is a specific sequence of actions associated with a specific set of data elements, all of which exercise a specific function or feature of the procedure/application/system under test.

To write a test case, concentrate on one requirement at a time. Based on that one requirement, derive the several real-life scenarios that are likely to occur when an end user uses the application. Although the test cases may cover the software requirements, that does not mean they are good enough: badly written or poor tests expose a tester to considerable risk.


Identifying good test cases:
1. To identify good test cases we have to empathize with end-user scenarios.

2. Apply “deep thinking” -- the famous James Bach example. The requirements specification consists of:

a. These are cards.

b. There is a letter on one side and a number on the reverse.

c. If there is a vowel on one side, there will be an even number on the reverse side.

E 7 V 4

While writing test cases for the above requirements, we have to think deeply about the non-obvious requirements first and remove the obvious, “of course” cases. That is, there need not be a vowel on the reverse side if there is an even number on one side.


Lateral thinking is thinking from all sides; it employs alternative methods of thinking. Rather than progressing in a logical manner from one thought to another, the aim is to make a leap into very different areas of thought, i.e. immediate perceptions are not the end.

Characteristics of a good test case:

  • Potential to find defects: The test case should surface defects that are as yet uncovered.
  • Atomic: Tests should have a single, clear and specific goal.
  • Trackable/Traceable: Tests must have an identifier, also called a naming convention, naming scheme or nomenclature.
  • Comprehensible: Generally tests are written by one test engineer and executed by another (who may be newly appointed, or may come to them long after they were written, and even the author may not remember them), so they must be understandable and executable by others.
  • Self-contained or complete: If the test engineer can execute the current test case only when a prerequisite is fulfilled, then that prerequisite must be mentioned within the test case itself.
  • Clearly provide environment details: The test case should include environment or context details, for example the operating system (Windows, Linux, UNIX, Macintosh).
  • Repeatable: Test cases should be repeatable in nature, that is, they can be used any number of times. For example, “Happy New Year” should be displayed on January 1st of any year, so the test case must be repeatable for any year (a short sketch after this list illustrates this).
  • Uniqueness: The test case must be unique; there should be no duplicates of previous test cases.
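
As a sketch of the repeatability point above, assume a hypothetical greeting_for(date) function standing in for the application behaviour under test (the function is invented for illustration):

    import datetime

    def greeting_for(date):
        """Hypothetical application logic: greet the user on New Year's Day."""
        return "Happy New Year" if (date.month, date.day) == (1, 1) else ""

    # The same test case is repeatable for any year, not just the current one.
    for year in (2007, 2008, 2020):
        assert greeting_for(datetime.date(year, 1, 1)) == "Happy New Year"
        assert greeting_for(datetime.date(year, 7, 15)) == ""
    print("repeatable greeting checks passed")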

A test case template consists of:
Sno, ID, Purpose, Priority, Description, Pre-Condition, Type, Env. Details, Expected Result
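
A hypothetical test case laid out against this template might be recorded as follows; every field value here is invented for illustration:

    # One illustrative test case using the template fields above.
    sample_test_case = {
        "Sno": 1,
        "ID": "TC_LOGIN_001",          # hypothetical naming scheme
        "Purpose": "Verify login with valid credentials",
        "Priority": "High",
        "Description": "Enter a valid user name and password and submit the form",
        "Pre-Condition": "A registered user account exists",
        "Type": "Functional",
        "Env. Details": "Windows / Firefox",
        "Expected Result": "The user lands on the home page",
    }

    for field, value in sample_test_case.items():
        print(field + ": " + str(value))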


Purpose of writing test cases:

1. Test cases are part of the deliverables to the customer. Here the goal of the test case is credibility, typically at the user acceptance test (acceptance) level.
2. Test cases are good documentation and serve as proof in the future.
3. Test cases are for the team’s internal use. Here the goal of the test case is testing efficiency.
4. Test cases improve the design. Writing test cases as early as possible in the software development life cycle has the side benefit of finding problems in the requirements and design of an application. Good test cases can even be included among the requirements in the requirements document.
5. The process of developing test cases makes the tester think through the operation of the application.
6. Test cases help us change code throughout its lifetime without breaking the existing code, so we can improve the design of the program.

7. Test cases are required for the automation process.
8. When new features are added, new test cases must be written to test those particular features.

The above drives home the fact that test cases are the biggest investment and greatest asset of a software quality team. To maximize the return on this investment through clever strategies and writing techniques, learn to make test cases easy to execute, to increase productivity, and to respond to project changes.

Source:

Ashwin Palaparthi

Cem Kaner: What is a good test case – stickyminds.com

Tech Gurus Corner – blog