Software testing

1.0 Software Testing Activities

We start testing activities from the first phase of the software development life cycle. We may generate test cases from the SRS and SDD documents and use them during system and acceptance testing. Hence, development and testing activities are carried out simultaneously in order to produce good-quality, maintainable software in time and within budget. We may carry out testing at many levels and may also take the help of a software testing tool. Whenever we experience a failure, we debug the source code to find the reasons for it. Finding the reasons for a failure is a very significant testing activity; it consumes a huge amount of resources and may also delay the release of the software.

1.1 Levels of Testing

Software testing is generally carried out at different levels. There are four such levels, namely unit testing, integration testing, system testing, and acceptance testing, as shown in figure 8.1. The first three levels of testing are done by the testers, and the last level (acceptance testing) is done by the customer(s)/user(s). Each level has specific testing objectives. For example, at the unit testing level, independent units are tested using functional and/or structural testing techniques. At the integration testing level, two or more units are combined and testing is carried out to examine the integration-related issues amongst the units. At the system testing level, the system is tested as a whole, and primarily functional testing techniques are used. Non-functional requirements like performance, reliability, usability, testability etc. are also tested at this level, as is load/stress testing. The last level, i.e. acceptance testing, is done by the customer(s)/user(s) for the purpose of accepting the final product.

1.1.1 Unit Testing

We develop software in parts / units, and every unit is expected to have a defined functionality. We may call it a component, module, procedure, function etc., which will have a purpose and may be developed independently and simultaneously. A. Bertolino and E. Marchetti have defined a unit as [BERT07]:

“A unit is the smallest testable piece of software, which may consist of hundreds or even just a few lines of source code, and generally represents the result of the work of one or a few developers. The unit test cases’ purpose is to ensure that the unit satisfies its functional specification and / or that its implemented structure matches the intended design structure. [BEIZ90, PFLE01].”

There are also problems with unit testing. How can we run a unit independently? A unit may not be completely independent: it may call a few units and also be called by one or more units. We may have to write additional source code to execute a unit. A unit X may call a unit Y, and the unit Y may in turn call a unit A and a unit B, as shown in figure 8.2(a). To execute the unit Y independently, we may have to write additional source code which handles the activities of the unit X and the activities of the units A and B. The additional source code that handles the activities of the calling unit X is called a “driver”, and the additional source code that handles the activities of the called units A and B is called a “stub”. The complete additional source code written for the design of stubs and drivers is called scaffolding.
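The driver/stub arrangement of figure 8.2(a) can be sketched in code. The following is a minimal Python illustration, not taken from the text: all unit names and behaviours are hypothetical, and the stubs simply return canned values so that unit Y can run without the real units A and B.

```python
# Scaffolding sketch for figure 8.2(a): unit Y calls units A and B,
# and is itself called by unit X. All names/behaviours are illustrative.

def stub_a(n):
    # Stub for unit A: returns a canned value instead of A's real logic.
    return n + 1

def stub_b(n):
    # Stub for unit B: another canned response.
    return n * 2

def unit_y(n, a=stub_a, b=stub_b):
    # The unit under test; its collaborators are injected so that
    # stubs can stand in for the real units A and B.
    return a(n) + b(n)

def driver():
    # Driver: plays the role of the calling unit X by invoking Y with
    # chosen inputs and checking the results.
    assert unit_y(3) == 10   # stub_a(3) = 4, stub_b(3) = 6
    assert unit_y(0) == 1
    print("unit Y passed")

driver()
```

Injecting the collaborators as parameters is one simple way to keep the scaffolding small; when the real units A and B become available, they are passed in place of the stubs.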

The scaffolding should be removed after the completion of unit testing. Unit testing may help us to locate an error easily due to the small size of a unit, and many white box testing techniques are effectively applicable at this level. We should keep stubs and drivers simple and small in size to reduce the cost of testing. If we can design units in such a way that they can be tested without writing stubs and drivers, we are both efficient and lucky. In practice this is generally difficult, and thus the requirement for stubs and drivers may not be eliminated; we may only minimize the scaffolding needed, depending upon the functionality and its division amongst the various units.

1.1.2 Integration Testing

A software system may have many units. We test units independently during unit testing after writing the required stubs and drivers. When we combine two units, we may like to test the interfaces between them. We combine two or more units because they share some relationship. This relationship is represented by an interface and is known as coupling. Coupling is the measure of the degree of interdependence between units: two units with high coupling are strongly connected and thus dependent on each other, whereas two units with low coupling are weakly connected and thus have low dependency on each other. Hence, highly coupled units are heavily dependent on other units, and loosely coupled units are comparatively less dependent on other units, as shown in figure 8.3.

Coupling increases as the number of calls amongst units increases or the amount of shared data increases. A design with high coupling may have more errors. Loose coupling minimizes interdependence, and some of the steps to minimize coupling are given below:

(i) Pass only data, not the control information.

(ii) Avoid passing undesired data.

(iii) Minimize parent / child relationship between calling and called units.

(iv) Minimize the number of parameters to be passed between two units.

(v) Avoid passing complete data structure.

(vi) Do not declare global variables.

(vii) Minimize the scope of variables.
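Step (i) above can be illustrated with a small sketch. The function names below are hypothetical, chosen only to contrast control coupling (a caller steering the callee's logic via a flag) with data coupling (each unit receiving only the data it needs).

```python
# Control coupling (discouraged): the caller passes a flag that steers
# the called unit's internal logic.
def format_amount_controlled(amount, as_currency):
    if as_currency:
        return f"${amount:.2f}"
    return f"{amount:.2f}"

# Data coupling (preferred): each unit receives only the data it needs
# and has a single, well-defined job.
def format_plain(amount):
    return f"{amount:.2f}"

def format_currency(amount):
    return f"${amount:.2f}"

print(format_currency(3.5))   # $3.50
print(format_plain(3.5))      # 3.50
```

With the second design, a change to currency formatting cannot disturb plain formatting, and each interface needs fewer test cases.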

Different types of coupling are data (best), stamp, control, external, common and content (worst). When we design test cases for interfaces, we should be very clear about the coupling amongst units; if it is high, a large number of test cases should be designed to test that particular interface.

A good design should have low coupling, and thus interfaces become very important. When interfaces are important, their testing is also important. In integration testing, we focus on the issues related to the interfaces amongst units. The common integration strategies are shown in figure 8.4. Top-down integration starts from the main unit and keeps adding all called units of the next level; this portion should be tested thoroughly with a focus on interface issues. After completion of integration testing at this level, the next level of units is added, and so on, till we reach the lowest-level units (the leaf units). There is no requirement for drivers; only stubs need to be designed. In bottom-up integration, we start from the bottom (i.e. from the leaf units) and keep adding upper-level units till we reach the top (i.e. the root unit); here there is no need for stubs. A sandwich strategy runs from the top and bottom concurrently, depending upon the availability of units, and may meet somewhere in the middle.

(b) Bottom-up integration (focus starts from edges i, j and so on)

(c) Sandwich integration (focus starts from a, b, i, j and so on)

Each approach has its own advantages and disadvantages. In practice, the sandwich integration approach is more popular; it can be started as and when two related units are available. We may use any functional or structural testing techniques to design the test cases.
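The stub-only situation of top-down integration can be sketched as follows. This is an illustrative Python example, not from the text: a main unit is integrated with a real lower-level `parse` unit, while a not-yet-integrated `store` unit is replaced by a stub that records the calls made across the interface.

```python
# Top-down integration sketch: the main unit is exercised with a real
# 'parse' unit, while the not-yet-added 'store' unit is stubbed.
# All names are hypothetical.

def parse(record):
    # Real lower-level unit already integrated.
    name, value = record.split(",")
    return {"name": name, "value": int(value)}

def store_stub(item):
    # Stub for the unit at the next level; records interface traffic.
    store_stub.calls.append(item)
    return True
store_stub.calls = []

def main_unit(records, store=store_stub):
    # Unit under integration test: exercises the parse/store interfaces.
    return [store(parse(r)) for r in records]

result = main_unit(["a,1", "b,2"])
assert result == [True, True]
assert store_stub.calls[0] == {"name": "a", "value": 1}
print("interface test passed")
```

Recording what crosses the interface lets the test check not just the final result but the data actually handed between the units, which is the focus of integration testing.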

The functional testing techniques are easy to implement with a particular focus on the interfaces, and some structural testing techniques may also be used. When a new unit is added as a part of integration testing, the software is considered changed software. New paths may appear, new input and output conditions may emerge, and new control logic may be invoked. These changes may also cause problems with units that previously worked flawlessly.

1.1.3 System Testing

We perform system testing after the completion of unit and integration testing. We test the complete software along with its expected environment. We generally use functional testing techniques, although a few structural testing techniques may also be used.

A system is defined as a combination of the software, hardware and other associated parts that together provide product features and solutions. System testing ensures that each system function works as expected, and it also tests for non-functional requirements like performance, security, reliability, stress, load etc. This is the only phase of testing which tests both functional and non-functional requirements of the system. A team of testing persons does the system testing under the supervision of a test team leader. We also review all associated documents and manuals of the software. This verification activity is equally important and may improve the quality of the final product.

Utmost care should be taken with the defects found during the system testing phase. A proper impact analysis should be done before fixing a defect. Sometimes, if the system permits, instead of being fixed the defects are just documented and mentioned as known limitations. This may happen in situations where fixing a defect is very time consuming or is technically not possible in the present design. Progress in system testing also builds confidence in the development team, as this is the first phase in which the complete product is tested with a specific focus on the customer's expectations. After the completion of this phase, customers are invited to test the software.

1.1.4 Acceptance Testing

This is an extension of system testing. When the testing team feels that the product is ready for the customer(s), they invite the customer(s) for a demonstration. After the demonstration, the customer(s) may like to use the product to gain satisfaction and confidence. This may range from ad hoc usage to systematic, well-planned usage of the product. Such usage is essential before accepting the final product. The testing done for the purpose of accepting a product is known as acceptance testing. It may be carried out by the customer(s) or by persons authorized by the customer. The venue may be the developer's site or the customer's site, depending on the mutual agreement; generally, acceptance testing is carried out at the customer's site. Acceptance testing is carried out only when the software is developed for particular customer(s). If we develop software for anonymous customers (like operating systems, compilers, CASE tools etc.), then acceptance testing is not feasible. In such cases, potential customers are identified to test the software, and this type of testing is called alpha / beta testing. Beta testing is done by many potential customers at their own sites without any involvement of developers / testers, whereas alpha testing is done by some potential customers at the developer's site under the direction and supervision of testers.

1.2 Debugging

Whenever software fails, we would like to understand the reason(s) for the failure. After knowing the reason(s), we may attempt to find a solution and make the necessary changes in the source code. These changes will hopefully remove the reason(s) for that software failure. The process of identifying and correcting a software error is known as debugging. It starts after receiving a failure report and completes after ensuring that all corrections have been rightly placed and the software does not fail with the same set of input(s). Debugging is quite a difficult phase and may become one of the reasons for software delays.

Every bug detection process is different, and it is difficult to know how long it will take to find and fix a bug. Sometimes, it may not be possible to detect a bug, or, if a bug is detected, it may not be feasible to correct it at all. These situations should be handled very carefully. In order to remove a bug, the developer must first discover that a problem exists, then classify the bug, locate where the problem actually lies in the source code, and finally correct the problem.


1.2.1 Why is debugging so difficult?

Debugging is a difficult process. This is probably due to human involvement and psychology. Developers become uncomfortable after receiving any request for debugging; they may take it as a slight against their professional pride. Shneiderman [SHNE80] has rightly commented on the human aspect of debugging:

“It is one of the most frustrating parts of programming. It has elements of problem solving or brain teasers, coupled with the annoying recognition that we have made a mistake. Heightened anxiety and the unwillingness to accept the possibility of errors, increase the task difficulty. Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately corrected.”

These comments explain the difficulty of debugging. Pressman [PRES97] has given some clues about the characteristics of bugs:

“The debugging process attempts to match symptom with cause, thereby leading to error correction. The symptom and the cause may be geographically remote. That is, symptom may appear in one part of program, while the cause may actually be located in other part. Highly coupled program structures may further complicate this situation. Symptom may also disappear temporarily when another error is corrected. In real time applications, it may be difficult to accurately reproduce the input conditions. In some cases, symptom may be due to causes that are distributed across a number of tasks running on different processors”.

There may be many reasons that make the debugging process difficult and time consuming. However, psychological reasons tend to prevail over technical ones. Over the years, debugging techniques have substantially improved, and they will continue to develop significantly in the near future. Some debugging tools are available, and they minimize the human involvement in the debugging process. However, debugging is still a difficult area and consumes a significant amount of time and resources.

1.2.2 Debugging Process

Debugging means detecting and removing bugs from programs. Whenever a program generates unexpected behaviour, it is known as a failure of the program. A failure may be mild, annoying, disturbing, serious, extreme, catastrophic or infectious, and depending on the type of failure, different actions are required. The debugging process starts after receiving a failure report, either from the testing team or from users. The steps of the debugging process are: replicate the bug, understand the bug, locate the bug, fix the bug, and retest the program.

(i) Replication of the bug:

The first step in fixing a bug is to replicate it. This means recreating the undesired behaviour under controlled conditions. The same set of input(s) should be given under similar conditions, and the program, after execution, should produce the same unexpected behaviour. If this happens, we have replicated the bug. In many cases, this is simple and straightforward: we execute the program on a particular input, or we press a particular button on a particular dialog, and the bug occurs. In other cases, replication may be very difficult. It may require many steps, or, in an interactive program such as a game, it may require precise timing. In the worst cases, replication may be nearly impossible. If we do not replicate the bug, how will we verify the fix? Hence, failure to replicate a bug is a real problem: any action which cannot be verified has no meaning, howsoever important it may be. Some of the reasons for non-replication of a bug are:

· The user incorrectly reported the problem.

· The program has failed due to hardware problems like memory overflow, poor network connectivity, network congestion, non-availability of system buses, deadlock conditions etc.

· The program has failed due to system software problems. The reason may be the usage of a different type of operating system, compiler, device driver etc. Any of the above-mentioned reasons may cause the failure of the program, even though there is no inherent bug in the program for this particular failure.

Our effort should be to replicate the bug. If we cannot do so, it is advisable to keep the matter pending till we are able to replicate it. There is no point in playing with the source code for a situation which is not reproducible.
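Once replicated, a bug is best captured as a repeatable test, so the same input can be rerun after every attempted fix. The following Python sketch is illustrative: the reported failure (averaging an empty list) and the fix are both hypothetical.

```python
# A replicated failure captured as a repeatable test. Suppose a user
# reports that averaging an empty list crashes; the (hypothetical) fix
# returns 0.0 for that case.

def average(values):
    if not values:          # the fix; without this guard,
        return 0.0          # len(values) == 0 raises ZeroDivisionError
    return sum(values) / len(values)

# The replication: the same input under controlled conditions,
# run again after every change to verify the fix.
assert average([]) == 0.0
assert average([2, 4]) == 3.0
print("bug no longer reproduces")
```

Because the failing input is now encoded in the test, verifying the fix (and catching any regression later) is mechanical.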

(ii) Understanding the bug

After replicating the bug, we may like to understand it; that is, we want to find the reason(s) for the failure. There may be one or more reasons, and this is generally the most time consuming activity. We must understand the program very clearly in order to understand a bug. If we are the designers and source code writers, there may not be any problem in understanding the bug; if not, we may face more serious problems. If the readability of the program is good and the associated documents are available, we may be able to manage. If the readability is not good (which happens in many situations) and the associated documents are not proper, the situation becomes very difficult and complex. We may call the designers; if we are lucky, they may still be with the company and we may get them. Imagine otherwise: what will happen? This is a real, challenging situation, and in practice we often have to face it and struggle with source code and documents written by persons no longer with the company. We may have to put in effort to understand the program. We may start from the first statement of the source code and work through to the last, with a special focus on critical and complex areas. We should be able to know where to look in the source code for any particular activity; this should also tell us the general way in which the program acts.

The worst cases are large programs written by many persons over many years. These programs may lack consistency and may become poorly readable over time due to various maintenance activities. We should simply do our best and try to avoid making the mess worse. We may also take the help of source code analysis tools for examining large programs. A debugger may also be helpful for understanding the program: a debugger inspects a program statement by statement and may be able to show the dynamic behaviour of the program using breakpoints. Breakpoints are used to pause the program at any point needed; at every breakpoint, we may look at the values of variables, the contents of relevant memory locations, registers etc. The main point is that, in order to understand a bug, program understanding is essential. We should put in the desired effort before looking for the reasons for the software failure; if we fail to do so, we may waste effort unnecessarily.
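In Python, pausing at a breakpoint to inspect state is provided by the standard `pdb` debugger. The sketch below is illustrative: the breakpoint is guarded so that it fires only when the suspicious condition occurs, which is a common way to combine breakpoints with bug replication.

```python
# Using a breakpoint to understand a program: pdb.set_trace() pauses
# execution so variables, the call stack and program state can be
# inspected interactively. Here it is guarded so it only triggers
# when the suspicious condition (an empty input) actually occurs.
import pdb

def mean(values):
    if not values:
        pdb.set_trace()  # pause here to examine the caller's state
    return sum(values) / len(values)

print(mean([1, 2, 3]))   # 2.0  (breakpoint not reached)
```

At the paused prompt one can print variables, step statement by statement, and walk up the stack, which matches the debugger usage described above.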

(iii) Locate the bug

There are two portions of the source code which need to be considered for locating a bug: the portion which causes the visible incorrect behaviour, and the portion which is actually incorrect. In most situations the two portions overlap, though sometimes they lie in different parts of the program. We should first find the source code which causes the incorrect behaviour; after knowing that behaviour and its related portion of the source code, we may find the portion which is at fault. Sometimes it may be very easy to identify the problematic source code (the second portion) by manual inspection; otherwise, we may have to take the help of a debugger. If we have a core dump, a debugger can immediately identify the line which fails. A core dump is a printout of all registers and relevant memory locations; we should document core dumps and retain them for possible future use. We may also set breakpoints while replicating the bug, and this process may help us to locate it.

Sometimes simple print statements may help us to locate the sources of the bad behaviour. This simple technique gives us the status of various variables at different locations in the program for a specific set of inputs, and a sequence of print statements may also portray the dynamics of variable changes. However, it is cumbersome to use in large programs, and it may generate superfluous data which is difficult to analyze and manage.

Another useful approach is to add check routines in the source code to verify that data structures are in a valid state. Such routines may help us to narrow down where data corruption occurs. If the check routines are fast, we may want to always enable them. Otherwise, leave them in the source code, and provide some sort of mechanism to turn them on when we need them.
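A check routine of the kind described above can be sketched as follows. The example is hypothetical: a table that must stay sorted by key, with an invariant check that can be switched on during debugging to fail fast at the operation that corrupts the structure.

```python
# A check routine that verifies a data structure invariant: this
# (hypothetical) table must stay sorted by key. Enable the check while
# debugging to narrow down where corruption occurs.
DEBUG_CHECKS = True

def check_sorted(table):
    if not DEBUG_CHECKS:
        return                      # cheap no-op when disabled
    keys = [k for k, _ in table]
    assert keys == sorted(keys), f"table corrupted: {keys}"

def insert(table, key, value):
    table.append((key, value))
    table.sort(key=lambda kv: kv[0])
    check_sorted(table)             # fails at the corrupting operation

t = []
insert(t, 2, "b")
insert(t, 1, "a")
print(t)   # [(1, 'a'), (2, 'b')]
```

The `DEBUG_CHECKS` switch implements the "turn them on when we need them" mechanism: the routine stays in the source code but costs almost nothing when disabled.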

The most useful and powerful way is source code inspection. This may help us to understand the program, understand the bug and finally locate the bug. A clear understanding of the program is an absolute requirement of any debugging activity. Sometimes, the bug may not be in the program at all: it may be in a library routine, in the operating system, or in the compiler. These cases are very rare, but there is a chance, and if everything else fails, we may have to look at such options.

(iv) Fix the bug and retest the program

After locating the bug, we fix it. Fixing a bug is a programming exercise rather than a debugging activity. After making the necessary changes in the source code, we have to retest it in order to ensure that the corrections have been made correctly and in the right place. Every change may also affect other portions of the source code; hence an impact analysis is required to identify the affected portions, and those portions should also be retested thoroughly. This retesting activity is called regression testing, and it is a very important part of any debugging process.

1.2.3 Debugging Approaches

There are many popular debugging approaches, but the success of any approach depends upon the understanding of the program. If the persons involved in debugging understand the program correctly, they may be able to detect and remove the bugs.

(i) Trial and Error Method

This approach depends on the ability and experience of the debugging persons. After a failure report is received, it is analyzed and the program is inspected. Based on experience and intelligence, and using trial and error, the bug is located and a solution is found. This is a slow approach and becomes impractical in large programs.

(ii) Backtracking

Backtracking can be used successfully in small programs. We start at the point where the program gives an incorrect result, such as an unexpected output being printed. After analyzing the output, we trace backward through the source code manually until a cause of the failure is found. The source code from the statement where the symptom of the failure is found to the statement where the cause is found is then analyzed properly. This technique brackets the location of the bug in the program, and subsequent careful study of the bracketed location may help us to rectify it. An obvious variation of backtracking is forward tracking, where we use print statements or other means to examine a succession of intermediate results to determine at what point the result first became wrong. These approaches (backtracking and forward tracking) may be useful only when the size of the program is small; as the program size increases, they become difficult to manage.
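Forward tracking can be sketched in a few lines. In this illustrative Python example the bug is planted deliberately, and the printed succession of intermediate results shows exactly the step at which the running total first becomes wrong.

```python
# Forward tracking sketch: print intermediate results to find where a
# computation first goes wrong. A bug is planted at step 2 on purpose.
def buggy_sum(values):
    total = 0
    for i, v in enumerate(values):
        if i == 2:
            total -= v   # the planted bug: subtracts instead of adds
        else:
            total += v
        print(f"step {i}: value={v} total={total}")
    return total

buggy_sum([10, 20, 30])
# The trace shows total = 10, then 30, then 0: the result first
# becomes wrong at step 2, which brackets the faulty statement.
```

Reading the trace forward pinpoints the earliest wrong intermediate value, after which only the statements executed at that step need inspection.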


(iii) Brute Force

This is probably the most common, though least efficient, approach to identifying the cause of a software failure. In this approach, memory dumps are taken, run time traces are invoked, and the program is loaded with print statements. The information produced may contain a clue which leads to the identification of the cause of a bug. Memory traces are similar to memory dumps, except that the printout contains only certain memory and register contents, and printing is conditional on some event occurring. Typical conditional events are the entry, exit or use of one of the following:

(a) A particular subroutine, statement or database

(b) Communication with I/O devices

(c) Value of a variable

(d) Timed actuations (periodic or random) in certain real time systems.

A special problem with trace programs is that the conditions are entered in the source code, so any change requires a recompilation. A huge amount of data is generated; although it may help to identify the cause, it may be difficult to manage and analyze.
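A conditional trace of the kind described can be sketched as follows. The scenario is hypothetical: variable contents are printed only when a chosen event occurs (here, a balance going negative), instead of dumping everything at every step.

```python
# Conditional trace sketch: print state only when the trace condition
# fires, rather than producing an unmanageable full dump.
def apply_transactions(balance, transactions):
    for i, t in enumerate(transactions):
        balance += t
        if balance < 0:   # the trace condition
            print(f"trace: txn {i} ({t}) drove balance to {balance}")
    return balance

apply_transactions(100, [-30, -50, -40, 10])
# Only the two offending steps are printed; everything else is silent.
```

Because the condition lives in the source code, changing it means editing and re-running the program, which mirrors the recompilation problem noted above.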

(iv) Cause Elimination

Cause elimination proceeds by induction or deduction and also introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each; thus, we may rule out causes one by one until a single one remains for validation. The cause is then identified, properly fixed and retested.
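Binary partitioning can be sketched concretely. In this illustrative Python example (the `process` function and its failing record are hypothetical), a batch of inputs containing one bad record is halved repeatedly until the culprit is isolated:

```python
# Binary partitioning for cause elimination: exactly one record makes
# the (hypothetical) process() fail; halve the batch to isolate it.
def process(record):
    if record == "bad":
        raise ValueError("failure")

def batch_fails(records):
    try:
        for r in records:
            process(r)
        return False
    except ValueError:
        return True

def isolate(records):
    # Keep the half that still reproduces the failure.
    while len(records) > 1:
        mid = len(records) // 2
        left = records[:mid]
        records = left if batch_fails(left) else records[mid:]
    return records[0]

data = ["a", "b", "bad", "c", "d", "e", "f", "g"]
print(isolate(data))   # bad
```

Each halving eliminates half of the remaining candidate causes, so eight candidates need only three runs; this is the same idea behind bisecting over inputs, configurations or code revisions.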

1.2.4 Debugging Tools

Many debugging tools are available to support the debugging process, and some of the manual activities can be automated using a tool. We may need a tool that executes a program one statement at a time and prints the values of any variables after each statement, freeing us from inserting print statements into the program manually. Run time debuggers are designed for this purpose. In principle, a run time debugger is nothing more than an automatic print statement generator: it allows us to trace the program path and the variables without having to put print statements in the source code. Almost every compiler available in the market comes with a run time debugger, which allows us to compile and run the program with a single compilation, rather than modifying the source code and recompiling as we try to narrow down the bug.
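The "automatic print statement generator" idea can be demonstrated with Python's standard `sys.settrace` hook, which is the same mechanism debuggers build on. This is a minimal sketch; the traced function is hypothetical.

```python
# A run time debugger is, in principle, an automatic print statement
# generator: sys.settrace reports each executed line and the current
# local variables without editing the source of the traced function.
import sys

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "target":
        print(f"line {frame.f_lineno}: locals={frame.f_locals}")
    return tracer

def target():
    x = 1
    y = x + 2
    return y

sys.settrace(tracer)
result = target()
sys.settrace(None)
print("result:", result)   # result: 3
```

Every executed line of `target` is reported with its variable values, exactly what manual print statements would have produced, but with no changes to `target` itself.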

Run time debuggers may detect bugs in the program, but may fail to find the causes of failures. We may need a special tool to find the causes of failures and correct the bug. Some errors, like memory corruption and memory leaks, may be detected automatically. This automation changed the debugging process, because it automated the task of finding the bug: the tool detects an error, and our job is simply to fix it. These tools are known as automatic debuggers and come in several varieties. The simplest ones are just a library of functions that can be linked into a program; when the program executes and these functions are called, the debugger checks for memory corruption and, if it finds any, reports it.

Compilers are also used for finding bugs, although they check only syntax errors and particular types of run time errors. Compilers should give proper and detailed error messages, which are of great help to the debugging process. A compiler may give all such information in the attribute table, which is printed along with the listing; the attribute table contains the various levels of warnings picked up by the compiler's scan. Modern compilers come with error detection features, and there is no excuse for designing a compiler without meaningful error messages.

We may apply a wide variety of tools, like run time debuggers, automatic debuggers, automatic test case generators, memory dumps, cross-reference maps, compilers etc., during the debugging process. However, tools are not a substitute for careful examination of the source code after thorough understanding.

1.3 Software Testing Tools

The most effort-consuming task in software testing is designing the test cases; executing them may not require much time and resources. Hence, the designing part is more significant than the execution part. Both parts are normally handled manually. Do we really need a tool? If yes, where and when can we use it: in the first part (designing of test cases), the second part (execution of test cases), or both? Software testing tools may be used to reduce the time of testing and to make testing as easy and pleasant as possible. Automated testing may be carried out without human involvement. This may help us in areas where a similar data set is to be given as input to the program again and again. A tool may do the repeated testing unattended, during nights or weekends, without human intervention.

Many non-functional requirements may be tested with the help of a tool. Suppose we want to test the performance of a software system under load, which may otherwise require many computers, much manpower and other resources. A tool may simulate multiple users on one computer, including the situation where many users access a database simultaneously.
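The idea of simulating many users on one computer can be sketched with threads. This is an illustrative Python example: the "database" is a fake in-memory list, and each thread plays one user issuing repeated queries against the shared resource.

```python
# Sketch of how a load-testing tool simulates many users on one
# machine: each thread plays one user querying a shared (fake) database.
import threading

lock = threading.Lock()
hits = []

def fake_db_query(user_id):
    with lock:                      # the shared resource under load
        hits.append(user_id)
    return f"row-for-{user_id}"

def user_session(user_id):
    for _ in range(10):
        fake_db_query(user_id)

threads = [threading.Thread(target=user_session, args=(u,))
           for u in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"served {len(hits)} queries from 5 simulated users")
```

A real load-testing tool works on the same principle but adds timing measurements, ramp-up schedules and far larger user counts.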

There are three broad categories of software testing tools, i.e. static, dynamic and process management. Most tools fall clearly into one of these categories, but there are a few exceptions, like mutation analysis systems, which fall into more than one category. A wide variety of tools are available, of differing scope and quality, and they assist us in many ways.

1.3.1 Static Software Testing Tools

Static software testing tools are those that analyze programs without executing them at all. They may also find source code which will be hard to test and maintain. As we all know, static testing is about prevention and dynamic testing is about cure. We should use both kinds of tools, but prevention is always better than cure. These tools may find more bugs than dynamic testing tools (where we execute the program). There are many areas for which effective static testing tools are available, and they have shown results in improving the quality of software.

(i) Complexity analysis tools

The complexity of a program plays a very important role in determining its quality. A popular measure of complexity is cyclomatic complexity, as discussed in chapter 4. It gives us an idea of the number of independent paths in the program and depends upon the number of decisions in the program. A higher value of cyclomatic complexity may indicate poor design and risky implementation. The measure may also be applied at the module level: modules with higher cyclomatic complexity values may either be redesigned or be tested very thoroughly. There are other complexity measures used in practice, like Halstead's software size measures, the knot complexity measure etc. Tools are available which are based on any of these complexity measures. Such a tool takes the program as input, processes it, and produces a complexity value as output; this value may be an indicator of the quality of design and implementation.
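A rough version of such a tool can be sketched with Python's standard `ast` module, using the rule that cyclomatic complexity equals the number of decisions plus one. This is only a sketch: real tools also count boolean operators, exception handlers and other decision points, and the analyzed function here is hypothetical.

```python
# Rough cyclomatic-complexity estimate for one function, using
# V(G) = number of decisions + 1. A sketch, not a full analyzer.
import ast

SOURCE = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""

tree = ast.parse(SOURCE)
decisions = sum(isinstance(node, (ast.If, ast.For, ast.While))
                for node in ast.walk(tree))
print("V(G) =", decisions + 1)   # V(G) = 4
```

The two `if` statements and the `for` loop give three decisions, hence four independent paths; a tool would flag functions whose value exceeds some threshold for redesign or extra testing.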

(ii) Syntax and Semantic Analysis Tools

These tools find syntax and semantic errors. Although the compiler detects all syntax errors during compilation, early detection of such errors may help to minimize other associated errors. Semantic errors are very significant, and compilers may be of little help in finding many of them. There are tools in the market that analyze the program and find such errors. Non-declaration of a variable, double declaration of a variable, division by zero, unspecified inputs and non-initialization of a variable are some of the issues which may be detected by semantic analysis tools. These tools are language dependent; they parse the source code, maintain a list of errors and provide implementation information. The parser may find semantic errors as well as make inferences as to what is syntactically correct.
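A toy check in the spirit of these tools can be built on Python's `ast` module. This sketch flags names that are read in a function but never assigned there; a real analyzer would handle scopes, parameters, globals and builtins far more carefully, and the analyzed function is hypothetical.

```python
# Toy semantic check: flag names that are read but never assigned.
# A sketch only; real tools track scopes, imports and builtins.
import ast

SOURCE = """
def f():
    x = 1
    return x + y      # y is never defined here
"""

tree = ast.parse(SOURCE)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)
        else:
            used.add(node.id)

undeclared = used - assigned
print("possibly undeclared:", sorted(undeclared))   # ['y']
```

Even this crude pass catches the non-declared variable `y` before the program ever runs, which is exactly the "prevention" role of static tools described above.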

(iii) Flow graph generator tools

These tools are language dependent; they take the program as input and convert it to its flow graph. The flow graph may be used for many purposes, like complexity calculation, path identification, generation of definition-use paths, program slicing etc. These tools help us understand the risky and poorly designed areas of the source code.

(iv) Code comprehension tools

These tools may help us to understand unfamiliar source code. They may also identify dead source code, duplicate source code and areas that require special attention and should be reviewed seriously.

(v) Code Inspectors

A source code inspector does the simple job of enforcing standards in a uniform way across many programs. Inspectors examine programs and force us to follow the guidelines of good programming practice. Although they are language dependent, most such guidelines are similar in many languages. These tools are simple and may find many critical and weak areas of a program. They may also suggest possible changes to the source code for improvement.
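The kind of uniform rule enforcement an inspector performs can be sketched as a small checker. The three rules below (line length, trailing whitespace, tab characters) and the name `inspect` are our own illustrative choices; real inspectors apply far larger rule sets.

```python
def inspect(source: str, max_len: int = 79):
    """Toy code inspector: enforce a few common style guidelines
    and return (line number, finding) pairs."""
    findings = []
    for no, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            findings.append((no, "line exceeds %d characters" % max_len))
        if line != line.rstrip():
            findings.append((no, "trailing whitespace"))
        if "\t" in line:
            findings.append((no, "tab character; use spaces"))
    return findings

print(inspect("x = 1   \n\ty = 2\n"))
# [(1, 'trailing whitespace'), (2, 'tab character; use spaces')]
```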

1.3.2 Dynamic testing tools

Dynamic software testing tools select test cases and execute the program to obtain results. They also analyze the results and find the reasons for any failures of the program. These tools are used after the implementation of the program and may also test non-functional requirements such as efficiency, performance and reliability.

(i) Coverage analysis tools

These tools find the level of coverage achieved by the selected test cases after executing the program. They give us an idea of the effectiveness of the selected test cases, highlight the unexecuted portions of the source code and force us to design special test cases for those portions. There are many levels of coverage, such as statement coverage, branch coverage, condition coverage, multiple-condition coverage and path coverage. As a minimum, we may want to ensure that every statement is executed at least once and every outcome of each branch statement is exercised at least once. A tool can show whether this minimum level of coverage has been achieved after executing the selected test cases. Tools are available for checking statement, branch, condition, multiple-condition and path coverage. A profiler displays the number of times each statement is executed; by studying its output we learn which portions of the source code were not executed, and we may then design test cases for those portions in order to achieve the desired level of coverage. Some tools also check whether the source code conforms to standards, and report the number of commented lines, number of non-commented lines, number of local variables, number of global variables, duplicate declarations of variables and so on. Some tools check the portability of the source code; code is not portable if it uses operating system dependent features. Some such tools are AutomatedQA's AQtime, Parasoft's Insure++ and Telelogic's Logiscope.
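The core mechanism of a statement-coverage tool can be sketched with Python's `sys.settrace` hook, which reports every line as it executes. This is a toy sketch; `run_with_coverage` and `absolute` are illustrative names, and real tools map the recorded line numbers back to annotated source listings.

```python
import sys

def run_with_coverage(func, *args):
    """Run func and record which of its line numbers execute --
    the essential idea behind statement-coverage tools."""
    executed = set()
    def tracer(frame, event, arg):
        # Record 'line' events only for the function under test.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def absolute(x):
    if x < 0:
        x = -x          # executed only for negative inputs
    return x

covered = run_with_coverage(absolute, 5)
# the 'x = -x' line stays uncovered until we also run a negative input
covered |= run_with_coverage(absolute, -5)
```

Comparing the two runs shows exactly why coverage tools force us to design additional test cases: the positive input alone leaves one statement unexecuted.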


(ii) Performance testing tools

We may want to test the performance of the software under stress/load. For example, while testing a result management system, we may observe its performance when 10 users are entering data and when 100 users are entering data simultaneously. Similarly, we may want to test a website with 10, 100 or 1000 users working simultaneously. This may require huge resources, and it may not be possible to create such a real-life environment for testing in the company. A tool can simulate such situations by creating multiple virtual users on a single computer and testing the software under various stress conditions. This is the most popular area for the use of testing tools, and many popular tools are available in the market. For instance, we may observe the response time of a database when 10, 100 or 1000 users access it simultaneously: will it be 10 seconds, 100 seconds or even 1000 seconds? No user will tolerate a response time of minutes. Performance testing is also called load or stress testing. Some popular tools are Mercury Interactive's LoadRunner, Apache's JMeter, Segue Software's SilkPerformer, IBM Rational's Performance Tester, Compuware's QALoad and AutoTester's AutoController.
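The core idea of simulating many concurrent users on a single computer can be sketched with threads. This is a toy illustration, not a commercial load tester: `fake_request` stands in for a real transaction such as a database query, and the function names are our own.

```python
import threading
import time

def simulate_users(n_users, request):
    """Fire n_users concurrent 'requests' and report the worst and
    average response times -- the essence of a load-testing tool."""
    times = []
    lock = threading.Lock()
    def user():
        start = time.perf_counter()
        request()
        elapsed = time.perf_counter() - start
        with lock:                       # protect the shared list
            times.append(elapsed)
    threads = [threading.Thread(target=user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(times), sum(times) / len(times)

def fake_request():
    # Stand-in for a real transaction, e.g. a database query.
    time.sleep(0.01)

worst, average = simulate_users(100, fake_request)
print(f"worst={worst:.3f}s average={average:.3f}s")
```

A real tool additionally ramps the load up gradually, distributes virtual users across machines and records throughput, not just response time.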

(iii) Functional / Regression Testing Tools

These tools test the software on the basis of its functionality, without considering implementation details. They may also generate test cases automatically and execute them without human intervention. Many combinations of inputs may be considered when generating test cases automatically, and these test cases may then be executed repeatedly, relieving us of repetitive testing activities. Some popular tools are IBM Rational's Robot, Mercury Interactive's WinRunner, Compuware's QACenter and Segue Software's SilkTest.
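The flavour of automated functional/regression testing can be shown with a small `unittest` suite that is re-run unattended after every change to the software. The function `apply_discount` and its test cases are illustrative, not from any particular tool.

```python
import unittest

def apply_discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price - price * percent / 100, 2)

class DiscountRegressionTests(unittest.TestCase):
    """Re-run after every change; any failure signals a regression
    in previously working behaviour."""
    def test_typical(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)
    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
    def test_full_discount(self):
        self.assertEqual(apply_discount(50.0, 100), 0.0)

# Run the suite programmatically, as an unattended tool would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Commercial tools add capture/replay of user interactions and data-driven generation of the input combinations, but the execute-and-compare cycle is the same.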

1.3.3 Process management Tools

These tools help us to manage and improve the software testing process. Using them, we may create a test plan, allocate resources, prepare schedules for unattended testing and track the status of a bug. They improve many aspects of testing and make it a disciplined process. Some such tools are IBM Rational's Test Manager, Mercury Interactive's TestDirector, Segue Software's SilkPlan Pro and Compuware's QADirector. Some configuration management tools, such as IBM Rational's ClearDDTS, Bugzilla and Samba's Jitterbug, also help with bug tracking, management and correction.

The selection of a tool depends on the application, expectations, quality requirements and the trained manpower available in the organization. Tools help us to make testing effective, efficient and performance oriented.

1.4 Software Test Plan

A software test plan is a document that specifies a systematic approach to planning the testing activities of the software. If we carry out testing according to a well-designed test plan, the effectiveness of testing improves, which in turn helps to produce a good quality product. The test plan document forces us to maintain a certain standard and a disciplined approach to testing. Many test plan formats are available, but the most popular is the IEEE Standard for Software Test Documentation (Std 829-1998). This document addresses the scope, schedule, milestones and purpose of the various testing activities. It also specifies the items and features to be tested and the features that are not to be tested. Pass/fail criteria, the roles and responsibilities of the persons involved, and associated risks and constraints are also described in this document. The structure of the IEEE 829-1998 test plan document is given in table 8.1 [IEEE98c]. All ten sections have a specific purpose, and some changes may be made as per the requirements of the project. The test plan document is prepared after the completion of the SRS document and may be modified as the project progresses. We should clearly specify the test coverage criteria and the testing techniques used to achieve them. We should also describe who will perform testing, at what level and when. The roles and responsibilities of the testers must be clearly documented.

IEEE standard for software test documentation (829 – 1998)


1. Introduction

1.1 Objectives

1.2 Testing strategy

1.3 Scope

1.4 Reference Material

1.5 Definition and Acronym

Overview of the project

2. Test Items

(A) Software documentation to be tested

2.1 Requirements specification

2.2 Design specification

2.3 Users guide

2.4 Operations guide

2.5 Installation guide

2.6 Other available documents

(B) Source code to be tested

2.7 Verification activities

2.8 Validation activities

Documentation and source code to be tested

3. Features to be tested

Include all features and combinations of features

4. Features not to be tested

List out such features along with reasons

5. Approach

5.1 Unit testing

5.2 Integration testing

5.3 System testing

5.4 Acceptance testing

5.5 Regression testing

5.6 Any other testing

Describe over all approach of testing

6. Pass / Fail criteria

6.1 Suspension criteria

6.2 Resumption criteria

6.3 Approval criteria

Criteria to be used for pass / fail

7. Testing process

7.1 Test Deliverables

7.2 Testing tasks

7.3 Responsibility

7.4 Resources

7.5 Schedules

Specify testing processes

8. Environmental requirements

8.1 Hardware

8.2 Software

8.3 Security

8.4 Tools

8.5 Publications

8.6 Risks and Assumptions

Identify environmental requirement for testing

9. Change management procedures

Identify change procedures

10. Plan approvals

Identify plan approvers. They should sign the document after approval.


Note: Select the most appropriate answer for each of the following questions.

1.1 The purpose of acceptance testing is:

(a) To find faults in the system

(b) To ensure the correctness of the system

(c) To test the system from business perspective

(d) To demonstrate the effectiveness of the system

1.2 Which of the following is not part of system testing?

(a) Performance, load and stress testing

(b) Bottom up integration testing

(c) Usability testing

(d) Business perspective testing

1.3 Which of the following is not the integration testing strategy?

(a) Top down

(b) Bottom up

(c) Sandwich

(d) Design based

1.4 Which is not covered under the category of static testing tools?

(a) Complexity analysis tools

(b) Coverage analysis tools

(c) Syntax and semantic analysis tools

(d) Code Inspectors

1.5 Which is not covered under the category of dynamic testing tools?

(a) Flow graph generator tools

(b) Performance testing tools

(c) Regression testing tools

(d) Coverage analysis tools

1.6 Which is not a performance testing tool?

(a) Mercury Interactive’s LoadRunner

(b) Apache’s JMeter

(c) IBM Rational’s Performance Tester

(d) Parasoft’s Insure++

1.7 Select functional / Regression testing tool out of the following:

(a) IBM Rational’s Robot

(b) Compuware’s QALoad

(c) AutomatedQA’s AQtime

(d) Telelogic’s Logiscope

1.8 Find a process management tool out of the following:

(a) IBM Rational’s Test Manager

(b) Mercury Interactive’s TestDirector

(c) Segue Software’s SilkPlan Pro

(d) All of the above

1.9 Which is not a coverage analysis tool?

(a) AutomatedQA’s AQtime

(b) Parasoft’s Insure++

(c) Telelogic’s Logiscope

(d) Apache’s JMeter

1.10 Which is not a functional / regression testing tool?

(a) Mercury Interactive’s WinRunner

(b) IBM Rational’s Robot

(c) Bugzilla

(d) Segue Software’s SilkTest

1.11 Which is not the specified testing level?

(a) Integration testing

(b) Acceptance testing

(c) Regression testing

(d) System testing

1.12 Which type of testing is done by the customers?

(a) Unit testing

(b) Integration testing

(c) System testing

(d) Acceptance testing

1.13 Which one is not a step to minimize the coupling?

(a) Pass only control information not data

(b) Avoid passing undesired data

(c) Do not declare global variables

(d) Minimize the scope of variables

1.14 Choose the most desirable type of coupling:

(a) Data coupling

(b) Stamp coupling

(c) Control coupling

(d) Common coupling

1.15 Choose the worst type of coupling

(a) Stamp coupling

(b) Content coupling

(c) Common coupling

(d) Control coupling

1.16 Which is most popular integration testing approach?

(a) Bottom up integration

(b) Top down integration

(c) Sandwich integration

(d) None of the above

1.17 Which is not covered in the debugging process?

(a) Replication of the bug

(b) Understanding of the bug

(c) Selection of bug tracking tool

(d) Fix the bug and retest the program

1.18 Which is not a debugging approach?

(a) Brute force

(b) Backtracking

(c) Cause elimination

(d) Bug multiplication

1.19 Binary partitioning is related to:

(a) Cause elimination

(b) Brute force

(c) Backtracking

(d) Trial and Error method

1.20 Which is not a popular debugging tool?

(a) Run time debugger

(b) Compiler

(c) Memory dumps

(d) Samba’s Jitterbug

1.21 Finding reasons of a failure is known as:

(a) Debugging

(b) Testing

(c) Verification

(d) Validation

1.22 Which of the following terms is not used for a unit?

(a) Component

(b) Module

(c) Function

(d) Documentation

1.23 Non-functional requirements testing is performed at the level of:

(a) System testing

(b) Acceptance testing

(c) Unit testing

(d) (a) and (b) both

1.24 Debugging process attempts to match:

(a) Symptom with cause

(b) Cause with inputs

(c) Symptoms with outputs

(d) Inputs with outputs

1.25 Static testing tools perform the analysis of programs:

(a) After their execution

(b) Without their execution

(c) During their execution

(d) None of the above


1.1 What are various levels of testing? Explain the objectives of every level. Who should do testing at every level and why?

1.2 Is unit testing possible or even desirable in all circumstances? Justify your answer with examples.

1.3 What is scaffolding? Why do we use stubs and drivers during unit testing?

1.4 What are various steps to minimize the coupling amongst various units? Discuss different types of coupling from best coupling to the worst coupling.

1.5 Compare the top down and bottom up integration testing approaches to test a program.

1.6 What is debugging? Discuss two debugging techniques. Write features of these techniques and compare the important features.

1.7 Why is debugging so difficult? What are various steps of a debugging process?

1.8 What are popular debugging approaches? Which one is more popular and why?

1.9 Explain the significance of debugging tools. List some commercially available debugging tools.

1.10 (a) Discuss the static and dynamic testing tools with the help of examples.

(b) Discuss some of the areas where testing cannot be performed effectively without the help of a testing tool.

1.11 Write short notes on:

(i) Coverage analysis tools

(ii) Performance testing tools

(iii) Functional / Regression testing tools

1.12 What are non-functional requirements? How can we use software tools to test these requirements? Discuss some popular tools along with their areas of applications.

1.13 Explain stress, load and performance testing.

1.14 Differentiate between the following:

(a) Integration testing and system testing

(b) System testing and acceptance testing

(c) Unit testing and integration testing

(d) Testing and debugging

1.15 What are the objectives of process management tools? Describe the process of selection of such a tool. List some commercially available process management tools.

1.16 What is the use of a software test plan document in testing? Is there any standard available?

1.17 Discuss the outline of a test plan document as per IEEE Std 829-1998.

1.18 Consider the problem of the URS given in chapter 5, and design a software test plan document.

1.19 Which is the most popular level of testing a software in practice and why?

1.20 Which is the most popular integration testing approach? Discuss with suitable examples.
