Verification And Validation Approaches Information Technology Essay

Verification and validation are used to ensure that a software system meets its specification and fulfils its purpose; they also ensure that the system is good enough for its intended use. Both are essential parts of the software development process. One of the first questions a person promoting a system's model will be asked is whether the model has been validated. Any engineered product can be checked in one of two ways. The first is to check the specified functions of the product, i.e., whether the product is doing the right thing, which is validation. The second is to check the internal workings of the product, i.e., whether the product is doing the thing right, which is verification.

18.1 Verification and Validation

Verification makes sure that the software being developed behaves and produces the desired outcomes in terms of functionality, based on the specified requirements. It makes sure that the intended behavior of the product is designed right.

Validation makes sure that the final product meets the system requirements. This is done by testing the final product, and it is only carried out once the verification stage is complete.

Verification and validation can be done statically or dynamically. Static V&V is concerned with analysis of the static system representation in order to discover problems, i.e., software inspections; it may be supplemented by tool-based document and code analysis. Dynamic V&V is concerned with software testing, i.e., with exercising and observing product behavior: the system is executed with test data and its operational behavior is observed.
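The two modes can be contrasted in a minimal Python sketch (the bare-`except` check, the `SOURCE` string, and the `divide` function are invented for illustration): a static check inspects the source text without running it, while a dynamic check executes the code with test data and observes the result.

```python
import ast

# Hypothetical piece of source code under V&V.
SOURCE = """
def divide(a, b):
    return a / b
"""

# --- Static V&V: inspect the code without executing it ---
# One illustrative inspection rule: flag bare `except:` clauses.
def bare_except_count(source: str) -> int:
    tree = ast.parse(source)
    return sum(
        1
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    )

# --- Dynamic V&V: execute the code with test data ---
namespace = {}
exec(SOURCE, namespace)
divide = namespace["divide"]

assert bare_except_count(SOURCE) == 0   # static check: no problems found
assert divide(10, 2) == 5               # dynamic check: observed output matches
```

Real static analysis tools (linters, type checkers) apply many such rules; the point here is only that the static check never runs the program, while the dynamic check does.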

Verification and validation are a series of technical and managerial activities performed to improve the quality and reliability of a system and to assure that the developed product satisfies the needs of the customers. When these activities are performed by an entity other than the developer of the system, they are referred to as independent V&V; the independent party may be a group within the organization or an external group (outside agency). V&V performed by the developer itself is referred to as internal V&V. V&V is accompanied by testing in order to improve quality assurance and to reduce risk.

Validation and verification processes go hand in hand, but the validation process starts after the verification process ends (once the coding of the program/software is finished). Each verification activity (such as requirement specification verification, functional design verification, etc.) has a corresponding validation activity (such as functional validation/testing, code validation/testing, system/integration validation, etc.).

Techniques and practices used in verification and validation need to be designed carefully. The planning process needs to be carried out from the initial stage of the development cycle. Software reviews, testing, and walkthroughs are examples of verification methods and techniques.

Verification and validation are time-consuming activities. They involve planning from the beginning, the development of test cases, actual testing, and analysis of the testing results.

In the next subsections we discuss both Verification and Validation in some detail.

18.1.1 Verification

Verification is a formal proof that an implementation of the model and the corresponding program(s) satisfy the specification. The software being developed should adhere to the pre-defined requirements at all stages of the development cycle, and verification is carried out to ensure this. Verification is the process of checking that the components of the system perform as intended. It answers the following questions: Is the logic of the model correctly implemented? Are we building the product right? Verification ensures that every step in the process of building the software delivers the correct product. Verification is done through testing, mathematical proofs, informal reasoning, reviews, walk-throughs, and conducting interviews.
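As a small illustration of checking that an implementation satisfies its specification, the sketch below (the `insertion_sort` and `satisfies_spec` names are hypothetical) states the specification of sorting as two properties and checks the implementation against them on sample inputs:

```python
def insertion_sort(items):
    """An implementation to be verified against its specification."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def satisfies_spec(inp, out):
    """Specification: the output is ordered and is a permutation of the input."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

# Checking the logic of the implementation against the specification.
for case in ([], [1], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))):
    assert satisfies_spec(case, insertion_sort(case))
```

Testing like this gives evidence rather than a formal proof; a proof would establish the two properties for all inputs, not just the sampled ones.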


During verification, a part of the product that is deemed to be fully working, together with the various documents associated with it, is checked by a number of 'verifiers' in order to discover and locate any errors or bugs in the software. This helps to cut down on errors that might otherwise cause the system to fail in the future. Statistical certainty about the system's performance increases as verification proceeds: important cases are tested, more tests are performed, errors are identified, and corrections are made to the underlying model.

Goals and requirements of verification include the following:

Everything (all completed parts of the system) must be verified: all design processes, and all of the products of these processes, must be verified.

The results of verification are not necessarily binary; the process need not be described as a yes-or-no activity. The verification process cannot ensure that the software is absolutely error free, since correctness of the system is relative. The presence of defects in large and complex systems cannot be completely avoided, and some defects can be tolerated.

No software system model will ever be fully verified, nor can verification guarantee a 100% error-free implementation. A high degree of statistical certainty is all that can be achieved for any model as more cases are tested.

Verification may be objective or subjective. Some instances of verification are the result of objective activities, while others rely on subjective judgment.

Implicit qualities must also be verified. Some requirements may be left out of the documents because they are implicit.

Correctness of the system alone does not imply that the program matches the intentions. There are basically two ways to verify system specifications:

Observe the system's dynamic behavior and determine whether it matches expectations.

Analyze the properties that can be deduced from the designed system.

18.1.2 Validation

Validation checks whether the specification correctly represents the informal requirements. Validation answers the question: Are we building the right product?

Validation is the process of finding out whether the product being built is right, i.e., whether the software product being developed does what the end-user expects of it. The software being developed should satisfy all the requirements in order to do what it is intended to do. Validation ensures that the software being developed or changed satisfies the functional and all other requirements.

Validation determines whether the model, as a conceptualization or an abstraction, is a meaningful and accurate representation (in important respects) of the real system. It is the process of actually testing the source code.

Validation ensures that the model meets its intended requirements in terms of the methods employed and the results obtained. The ultimate goal of validation is to make the model useful in the sense that the model addresses the right problem, provides accurate information about the system being modeled, and makes the model actually usable.

Validation is carried out throughout the various stages of the development cycle as well as at the end of it, in order to make sure that the pre-defined requirements are adhered to.

The validation process makes use of various testing methods (such as test plans and test cases) which are used throughout the development cycle. The phases involved in the validation process are: code validation/testing, integration validation/integration testing, functional validation/functional testing, and system/user acceptance testing/validation.
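A minimal sketch of these phases, using an invented two-component shopping-cart example (the catalogue, `price_of`, and `cart_total` names are hypothetical), might look like this:

```python
# Hypothetical two-component system: a price lookup and a cart total.
PRICES = {"apple": 30, "pear": 25}           # assumed catalogue, prices in cents

def price_of(item):                           # component 1
    return PRICES[item]

def cart_total(items):                        # component 2 (uses component 1)
    return sum(price_of(i) for i in items)

# Code validation/testing: each unit checked in isolation.
assert price_of("apple") == 30

# Integration validation/testing: the units checked working together.
assert cart_total(["apple", "pear"]) == 55

# Functional validation/testing: behaviour checked against a stated
# requirement, e.g. "an empty cart totals zero".
assert cart_total([]) == 0
```

System/user acceptance testing would then exercise the assembled product end to end from the user's point of view, which is hard to show in a code fragment.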


18.2 Differences between Validation and Verification

Table 1 shows the differences between validation and verification.

Question asked
  Validation: Did we build the right product?
  Verification: Did we build the product right? Does each function work correctly?

What is checked against
  Validation: Does the system implement the requirements?
  Verification: Does the product match the specification?

When carried out
  Validation: Carried out at the end of the development cycle.
  Verification: Certain deliverables at various stages of the development cycle are reviewed to make sure that the progress and quality of the work are at the desired level.

Scope
  Validation: Views the complete system as a whole.
  Verification: Focuses on the smaller sub-systems.

Accessing data
  Validation: Is the right data being accessed?
  Verification: Is the data being accessed the right/correct way?

Level of activity
  Validation: Carried out after a finished product of the software is produced, to make sure it meets the specified requirements. This also applies to prototypes.
  Verification: Carried out at various stages of the development cycle, using testing, walkthroughs, and developer and end-user feedback.

Concern
  Validation: Concerned with the correctness of the finished product in relation to the end-user needs and requirements.
  Verification: Concerned with the correctness and consistency of the software being developed during the development stage.

Which is implemented first
  Validation: Carried out after verification.
  Verification: Carried out before validation.

What is evaluated?
  Validation: The software/end product.
  Verification: The way in which the software/end product is developed.

Table 1: Differences between validation and verification

18.3 Techniques of Software Verification and Validation

In this section we present techniques of software verification and validation. We also discuss various categories of testing of these two approaches.

18.3.1 Techniques of Software Verification

There are two major categories of techniques of software verification (Techniques used for verification testing):

Dynamic testing: Dynamic testing is concerned with software testing. It includes exercising and observing product behaviour: the system is executed with test data and its operational behaviour is observed. Dynamic testing involves execution of a system or component, selection of a group of test cases consisting of test data, and comparison of the actual outputs against the expected outputs for those test cases.

Sub categories of dynamic testing include the following:

Functional testing: Functional testing involves identification and testing of all functions of the system as defined in the basic requirements documents. It is a black-box technique that uses test cases designed to investigate particular features of the system.

Structural testing: Structural testing is a white-box technique. It requires full knowledge of the implementation of the system; information about the internal structure of the system is used to design tests that check the function of individual components.

Random testing: Random testing uses a selection of test cases out of the set of all possible test cases. It can detect faults that go undetected by other, systematic testing techniques. Exhaustive testing is the limiting case, in which the input test cases cover every possible set of input values.
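The three dynamic-testing styles above can be contrasted on a single function; the `clamp` example below is invented for illustration:

```python
import random

def clamp(x, low, high):
    """Restrict x to the closed interval [low, high]."""
    if x < low:
        return low
    if x > high:
        return high
    return x

# Functional (black-box) testing: cases derived from the requirement alone,
# without looking at the implementation.
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(99, 0, 10) == 10

# Structural (white-box) testing: one case per branch in the implementation
# (x < low, x > high, and the fall-through branch). Here the same three
# cases happen to cover all branches.

# Random testing: many generated inputs checked against a general property
# rather than against hand-picked expected values.
rng = random.Random(0)
for _ in range(1000):
    x = rng.randint(-100, 100)
    assert -50 <= clamp(x, -50, 50) <= 50
```

Note the shift in oracle: the black-box cases assert exact outputs, while the random cases assert a property that must hold for every input.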

Static Testing: Static testing is concerned with software inspections. It analyses the static system representation and discovers problems. It may be supplemented by tool-based document and code analysis. Static testing does not involve operation of the system or its components.

Sub categories of static testing include the following:

Consistency techniques: Consistency techniques are used to analyse the product for consistency. They ensure correctness of program properties such as correct syntax and correct parameter matching between functions and procedures. Consistency techniques also ensure correct program typing and correct translation of requirements and specifications.


Measurement techniques: Measurement techniques are used to measure some property of the program. They measure system properties such as how error-prone, understandable, and well structured it is.
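A toy measurement technique might count decision points in the source as a crude proxy for how error-prone or hard to understand the code is (loosely in the spirit of cyclomatic complexity); the `decision_points` function and sample sources below are illustrative only:

```python
import ast

def decision_points(source: str) -> int:
    """Count branching constructs in the source - a crude proxy for
    how error-prone or hard to understand the code is."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(n, branch_nodes) for n in ast.walk(ast.parse(source)))

SIMPLE = "def f(x):\n    return x + 1\n"
BRANCHY = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2 == 0:\n"
    "                x += i\n"
    "    return x\n"
)

assert decision_points(SIMPLE) == 0   # straight-line code
assert decision_points(BRANCHY) == 3  # two ifs and a for loop
```

Like all static measurement, this runs no code: it inspects the program text and reports a number that reviewers can use to flag parts of the system needing closer verification.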

Techniques for verification include the following:


Informal: walkthroughs, inspections, informal prototyping. Informal prototyping is used as a way of verifying the specifications.

Formal: formal prototyping is used to create some sort of interpreter for the system (or a simulation of it).

18.3.2 Techniques of Software Validation

One of the popular techniques for early validation of a system is a detailed review of its contents performed by experts. This method is often referred to as a walk-through. A walk-through is useful for validating standards which specify, for example, definitions of terms, methodologies, transmission characteristics, electrical properties, and other physical attributes.

The main validation techniques implemented in automatic tools are interactive simulation and various types of state space exploration.

The validation process includes the development of test suites that provide a systematic and thorough review of the product. This can be a valuable extension to a walk-through. Test suite development may be used as a validation method when the base specification has not been produced in a formal language, so that the effort to develop a validation model would be too great.

When it is difficult to produce a simulation model for the product to be validated, the only validation method that can be considered as an alternative to a walk-through is prototyping.

The major categories of software validation techniques (techniques used for validation testing) are:

Formal methods: Formal methods are used as both a verification technique and a validation technique. They involve the use of mathematical and logical techniques to express, investigate, and analyze the specification, design, documentation, and behavior of software systems.

Fault injection: Fault injection is an intentional activation of faults by either hardware or software to observe the system operation under such faulty situations.

Hardware fault injection: Hardware fault injection, also known as physical fault injection, is likewise an intentional activation of faults, in which the faults are injected into the physical hardware.

Software fault injection: Software fault injection involves injection of errors into the computer memory through some software techniques. It is a sort of a simulation of hardware fault injection.
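A minimal sketch of software fault injection (all names below are invented): a dependency is replaced by one that deliberately raises an error, and the system's fallback behaviour under the fault is observed.

```python
# Hypothetical system: a temperature reporter that depends on a sensor read.
def read_sensor():
    return 21.5  # stand-in for a real hardware read

def report_temperature(read=read_sensor):
    try:
        return f"{read():.1f} C"
    except IOError:
        return "sensor unavailable"   # intended graceful degradation

# Fault injection: a replacement dependency that activates the fault.
def faulty_read():
    raise IOError("injected fault")

# Normal operation.
assert report_temperature() == "21.5 C"

# Operation under the injected fault: the system degrades gracefully
# instead of crashing.
assert report_temperature(read=faulty_read) == "sensor unavailable"
```

Passing the dependency as a parameter is just one injection mechanism; test frameworks typically achieve the same effect by monkeypatching the real dependency in place.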

Dependency analysis: Dependency analysis involves identification of hazards and subsequently proposing methods to reduce the risk of the hazards.

Hazard analysis: Hazard analysis involves using instructions to identify hazards, their root causes, and possible countermeasures.

Risk analysis: Risk analysis goes beyond hazard analysis by identifying the possible consequences of each hazard and their probability of occurrence.
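As an illustration of going from hazards to risks, a simple risk score can be computed as probability of occurrence times consequence severity for each hazard, and the hazards ranked by it; the hazards and figures below are invented:

```python
# Hypothetical hazard table: probability of occurrence and consequence
# severity (on an invented 1-10 scale) for each identified hazard.
hazards = {
    "sensor failure": {"probability": 0.10, "severity": 4},
    "power loss":     {"probability": 0.02, "severity": 9},
    "operator error": {"probability": 0.30, "severity": 2},
}

def risk_score(h):
    """Risk = probability of occurrence x consequence severity."""
    return h["probability"] * h["severity"]

# Rank hazards by risk so mitigation effort goes to the highest first.
ranked = sorted(hazards, key=lambda name: risk_score(hazards[name]),
                reverse=True)
assert ranked[0] == "operator error"   # 0.60 > 0.40 > 0.18
```

This captures the distinction in the text: hazard analysis produces the table of hazards and causes, while risk analysis adds the consequence and probability columns and the ranking.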


In this lecture, verification and validation were explained in terms of their definitions, the techniques involved, and their purpose. The various approaches to verification, testing, test cases, and design were also discussed.


Descriptive specifications are more abstract specifications than operational specifications. (TRUE)

Entity-Relationship (E-R) diagrams are a tool used in operational specification. (TRUE)

An E-R diagram is not suited to designing relational databases. (FALSE)

Developing an E-R diagram consists of 6 steps. (FALSE)

Verification makes sure that the software being developed behaves and produces the desired outcomes in terms of functionality, based on the specified requirements. (TRUE)

Verification cannot be used as a formal proof that an implementation of the model and the corresponding program(s) satisfy the specification. (FALSE)

Validation is carried out at the end of development cycle. (TRUE)
