V-Model in Software Testing
The V Model, while admittedly obscure, gives equal weight to testing rather than treating it as an afterthought.
Initially defined by the late Paul Rook in the late 1980s, the V was included in the U.K.'s National Computing Centre publications in the 1990s with the aim of improving the efficiency and effectiveness of software development. It's accepted in Europe and the U.K. as a superior alternative to the waterfall model; yet in the U.S., the V Model is often mistaken for the waterfall.
The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side.
In fact, the V Model emerged in reaction to some waterfall models that showed testing as a single phase following the traditional development phases of requirements analysis, high-level design, detailed design and coding. The waterfall model did considerable damage by supporting the common impression that testing is merely a brief detour after most of the mileage has been gained by mainline development activities. Many managers still believe this, even though testing usually takes up half of the project time.
Several testing strategies are available and lead to the following generic characteristics:
1) Testing begins at the unit level and works "outward" toward the integration of the entire system.
2) Different testing techniques are appropriate at different points of the software development cycle.
Testing is divided into the following phases:
The context of unit and integration testing changes significantly in Object Oriented (OO) projects. Class integration testing based on sequence diagrams, state-transition diagrams, class specifications and collaboration diagrams forms the unit and integration testing phase for OO projects. For web applications, class integration testing identifies the integration of classes that implement certain functionality.
The meaning of system testing and acceptance testing, however, remains the same in the OO and web application contexts. The test case design for system and acceptance testing nevertheless needs to handle the OO-specific intricacies.
Relation Between Development and Testing Phases
Testing is planned right from the URD stage of the SDLC. The following table indicates the planning of testing at the respective stages. For projects with a tailored SDLC, the testing activities are also tailored according to the requirements and applicability.
The "V" Diagram indicating this relationship is as follows
DRE (Defect Removal Efficiency) = A / (A + B), where A is the number of defects found by the testing team and B is the number of defects found by the customer during maintenance.
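As a quick sketch, the DRE computation can be expressed directly; the defect counts below are hypothetical:

```python
def defect_removal_efficiency(a: int, b: int) -> float:
    """DRE = A / (A + B): A = defects found by the testing team,
    B = defects found by the customer during maintenance."""
    if a + b == 0:
        raise ValueError("no defects recorded")
    return a / (a + b)

# Hypothetical figures: 90 defects caught in testing, 10 escaped to the field.
print(defect_removal_efficiency(90, 10))  # 0.9
```

A DRE close to 1.0 indicates that testing catches most defects before release.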
Refinement of the V-Model
To reduce the cost and time of the development process, small and medium-sized companies often follow a refined form of the V-Model.
Software Testing Phases
1. Unit Testing
As per the "V" diagram of SDLC, testing begins with Unit testing. Unit testing makes heavy use of White Box testing techniques, exercising specific paths in a unit’s control structure to ensure complete coverage and maximum error detection.
Unit testing focuses verification effort on the smallest unit of software design - the unit. The units are identified at the detailed design phase of the software development life cycle, and unit testing can be conducted in parallel for multiple units. Five aspects are tested under unit testing:
- The module interface is tested to ensure that information properly flows into and out of the program unit under test.
- The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution.
- Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
- All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.
- And finally, all error-handling paths are tested.
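The aspects above can be illustrated with a small sketch; the `clamp` function below is a hypothetical unit, not one from the original text:

```python
def clamp(value: int, low: int, high: int) -> int:
    """Hypothetical unit under test: restrict value to the range [low, high]."""
    if low > high:                      # error-handling path
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Boundary conditions: the unit behaves correctly exactly at the limits.
assert clamp(0, 0, 10) == 0
assert clamp(10, 0, 10) == 10

# Independent (basis) paths: below, inside, and above the range.
assert clamp(-5, 0, 10) == 0
assert clamp(5, 0, 10) == 5
assert clamp(15, 0, 10) == 10

# Error-handling path: invalid bounds must raise.
try:
    clamp(5, 10, 0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```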
Unit Test Coverage Goals:
Path Coverage
The path coverage technique verifies whether each of the possible paths through each function executes properly. A path is a sequence of branches representing one possible flow of control. Since loops introduce an unbounded number of paths, path coverage employs tests that consider only a limited number of looping possibilities.
Statement Coverage
The statement coverage technique requires that every statement in the program be executed at least once. It verifies coverage at a high level rather than at the level of decision outcomes or Boolean expressions. Its advantage is that this measure can be applied directly to object code and does not require processing the source code.
Decision (Logic/Branch) Coverage
The decision coverage test technique seeks to identify the percentage of all possible decision outcomes that have been considered by a suite of test procedures. It requires that every point of entry & exit in the software program be invoked at least once. It also requires that all possible conditions for a decision in the program be exercised at least once.
Condition Coverage
This technique verifies the true and false outcomes of each Boolean subexpression, employing tests that measure the subexpressions independently. It covers the different conditions, including those that are interrelated.
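A minimal sketch of the difference between statement and decision coverage, using a hypothetical `grade` function:

```python
def grade(score: int) -> str:
    """Hypothetical function with two decisions."""
    result = "fail"
    if score >= 50:
        result = "pass"
    if score >= 80:
        result = "distinction"
    return result

# A single input of 85 executes every statement (100% statement coverage)
# but exercises only the True outcome of each decision.
assert grade(85) == "distinction"

# Decision coverage additionally requires the False outcomes:
assert grade(60) == "pass"   # first decision True, second False
assert grade(10) == "fail"   # both decisions False
```

Three inputs are needed here for full decision coverage, while one suffices for statement coverage.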
Unit Testing (COM/DCOM Technology):
The integral parts covered under unit testing will be:
- The Active Server Page (ASP) that invokes the ATL component (which in turn can use C++ classes)
- The actual component
- The interaction of the component with the persistent store or database, and the database tables
A driver for the unit testing of a unit belonging to a particular component or subsystem depends on the component alone. Wherever a user interface is available, the UI called from a web browser will initiate the testing process. If a UI is not available, appropriate drivers (code in C++, for example) will be developed for testing.
Unit testing would also include testing inter-unit functionality within a component. This will consist of two different units belonging to same component interacting with each other. The functionality of such units will be tested with separate unit test(s).
Each unit of functionality will be tested for the following considerations:
Type: Type validation ensures, for example, that a field expecting alphanumeric characters does not accept user input of any other kind.
Presence: This validation ensures all mandatory fields are present; they should also be mandated by the database by making the column NOT NULL (this can be verified from the low-level design document).
Size: This validation ensures that the size of a float or variable-length character string input by the user does not exceed the size allowed by the database for the respective column.
Validation: Any other business validation that applies to a specific field, or to a field that depends on another field (e.g., range validation - body temperature should not exceed 106 degrees Fahrenheit), duplicate checks, etc.
GUI based: In case the unit is UI based, GUI-related consistency checks like font sizes, background colour, window sizes, and message & error boxes will be performed.
2. Integration Testing
After unit testing, modules shall be assembled or integrated to form the complete software package as indicated by the high level design. Integration testing is a systematic technique for verifying the software structure and sequence of execution while conducting tests to uncover errors associated with interfacing.
Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. Integration testing is sub-divided as follows:
i) Top-Down Integration Testing: Top-Down integration is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
ii) Bottom-Up Integration Testing: Bottom-Up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Since modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available and the need for stubs is eliminated.
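The role of stubs in top-down integration can be sketched as follows; the `total_price` and tax modules are hypothetical stand-ins:

```python
# Top-down integration sketch: the main control module is tested first,
# with a stub standing in for the not-yet-integrated subordinate module.

def tax_stub(amount: float) -> float:
    """Stub: returns a canned value instead of the real tax calculation."""
    return 0.0

def real_tax(amount: float) -> float:
    """The real subordinate module, integrated later."""
    return round(amount * 0.2, 2)

def total_price(amount: float, tax_fn=tax_stub) -> float:
    """Main control module; the subordinate is injected so a stub can replace it."""
    return round(amount + tax_fn(amount), 2)

# Phase 1: the main module is verified against the stub.
assert total_price(100.0) == 100.0
# Phase 2: the real module replaces the stub as integration moves downward.
assert total_price(100.0, tax_fn=real_tax) == 120.0
```

In bottom-up integration the situation is reversed: `real_tax` would be tested first via a driver, and no stub would be needed.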
iii)Integration Testing for OO projects:
Thread Based Testing
Thread based testing follows an execution thread through objects to ensure that classes collaborate correctly. In thread based testing:
- The set of classes required to respond to one input or event for the system is identified
- Each thread is integrated and tested individually
- A regression test is applied to ensure that no side effects occur
Use Based Testing
Use based testing evaluates the system in layers. The common practice is to employ the use cases to drive the validation process. In use based testing:
- Initially, independent classes (i.e., classes that use very few other classes) are integrated and tested
- These are followed by the dependent classes that use the independent classes, taking a layered approach
- This sequence is repeated, adding and testing the next layer of dependent classes, until the entire system is tested
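The layered ordering used in use based testing can be derived mechanically from the class dependencies; a sketch with hypothetical classes:

```python
# Sketch: derive a use-based integration order from class dependencies.
# Hypothetical classes; an entry "A": {"B"} means "A uses B".
uses = {
    "OrderUI": {"Order", "Customer"},
    "Order": {"Product"},
    "Customer": set(),
    "Product": set(),
}

def integration_layers(uses):
    """Group classes into layers: independent classes first, then the
    classes that depend only on already-integrated layers, and so on."""
    remaining = dict(uses)
    layers = []
    integrated = set()
    while remaining:
        layer = sorted(c for c, deps in remaining.items() if deps <= integrated)
        if not layer:
            raise ValueError("cyclic dependency")
        layers.append(layer)
        integrated.update(layer)
        for c in layer:
            del remaining[c]
    return layers

print(integration_layers(uses))
# [['Customer', 'Product'], ['Order'], ['OrderUI']]
```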
Integration Testing for Web applications:
Collaboration diagrams, screens and report layouts are matched against the OOAD artefacts, and the associated class integration test case report is generated.
3. Regression Testing
Each time a new module is added as part of integration testing, new data flow paths may be established, new I/O may occur, and new control logic may be invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
Regression testing may be conducted manually, by re-executing a subset of all test cases. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
- A representative sample of tests that exercise all software functions
- Additional tests that focus on software functions likely to be affected by the change
- Tests that focus on the software components that have been changed
As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite shall be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
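Selecting the regression subset can be sketched as a lookup from changed components to affected tests; the test and component names are hypothetical:

```python
# Sketch: pick the regression subset from a test -> components map.
# All names are hypothetical.
test_map = {
    "test_login":    {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"catalog"},
    "test_smoke":    {"auth", "cart", "payment", "catalog"},  # representative sample
}

def regression_suite(test_map, changed):
    """Select tests touching a changed component, plus the smoke sample."""
    return sorted(name for name, comps in test_map.items()
                  if comps & changed or name == "test_smoke")

print(regression_suite(test_map, {"payment"}))
# ['test_checkout', 'test_smoke'] - far fewer than re-running everything
```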
4. System Testing
After the software has been integrated (constructed), sets of high order tests shall be conducted. System testing verifies that all elements mesh properly and the overall system function/performance is achieved.
The purpose of system testing is to fully exercise the computer-based system: to verify that all system elements mesh together and to validate conformance against the SRS. System testing is categorised into the following types; the type(s) of testing shall be chosen depending on the customer / system requirements.
The different types of tests that come under system testing are listed below:
- Compatibility / Conversion Testing: In cases where the software developed is a plug-in into an existing system, the compatibility of the developed software with the existing system has to be tested. Likewise, the conversion procedures from the existing system to the new software are to be tested.
- Configuration Testing: Configuration testing includes either or both of the following:
- testing the software with the different possible hardware configurations
- testing each possible configuration of the software
If the software itself can be configured (e.g., components of the program can be omitted or placed in separate processors), each possible configuration of the software should be tested.
If the software supports a variety of hardware configurations (e.g., different types of I/O devices, communication lines, memory sizes), then the software should be tested with each type of hardware device and with the minimum and maximum configuration.
- Documentation Testing:
Documentation testing is concerned with the accuracy of the user documentation. This involves
i) Review of the user documentation for accuracy and clarity
ii) Testing the examples illustrated in the user documentation by preparing test cases on the basis of these examples and testing the system
- Facility Testing: Facility testing is the determination of whether each facility (or functionality) mentioned in the SRS is actually implemented. The objective is to ensure that all the functional requirements documented in the SRS are accomplished.
- Installability Testing: Certain software systems have complicated procedures for installing the system; for instance, the system generation (sysgen) process on IBM mainframes. The testing of these installation procedures is part of system testing.
Proper packaging of the application, configuration of various third-party software, and database parameter settings are some of the issues important for easy installation.
- Performance Testing:
Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all phases of testing. Even at the unit level, the performance of an individual module is assessed as white-box tests are conducted. However, performance testing is complete only when all system elements are fully integrated and the true performance of the system is ascertained against the customer requirements.
- Performance Testing for Web Applications:
The most realistic strategy for rolling out a Web application is to do so in phases. Performance testing must be an integral part of designing, building, and maintaining Web applications.
Automated testing tools play a critical role in measuring, predicting, and controlling application performance. There is a paragraph on automated tools available for testing web applications at the end of this document.
In the most basic terms, the final goal for any web application intended for high-volume use is for users to consistently have:
i) continuous availability
ii) consistent response times - even during peak usage times
Performance testing has five manageable phases:
i) architecture validation
ii) performance benchmarking
iii) performance regression
iv) performance tuning and acceptance
v) continuous performance monitoring, necessary to control performance and manage growth
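The benchmarking phase boils down to measuring response times over many runs; a minimal in-process sketch (the request handler is a hypothetical stand-in, not a real web call):

```python
import statistics
import time

def handle_request() -> None:
    """Hypothetical request handler standing in for the system under test."""
    sum(range(10_000))  # simulated work

def benchmark(fn, runs: int = 50) -> dict:
    """Measure response times over repeated calls and report mean and p95."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return {"mean": statistics.mean(samples),
            "p95": statistics.quantiles(samples, n=20)[-1]}  # 95th percentile

result = benchmark(handle_request)
print(f"mean={result['mean']:.6f}s p95={result['p95']:.6f}s")
```

Real tools run the same measurement against live HTTP endpoints under varying load, then track the percentiles across builds for the regression phase.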
- Procedure Testing:
If the software forms a part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested. These may include procedures to be followed by
i) The human operator
ii) Database administrator
iii) Terminal user
These procedures are to be tested as part of System testing.
- Recovery Testing: Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialisation, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the time required to repair is evaluated to determine whether it is within acceptable limits.
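Automatic recovery via checkpointing can be sketched as follows; the failure is simulated and all names are hypothetical:

```python
import json
import os
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "recovery_demo.ckpt")
crash_once = True  # simulate a single mid-run failure

def process(items, state=None):
    """Process items, checkpointing progress so a restart can resume."""
    global crash_once
    state = state or {"done": 0, "total": 0}
    for i in range(state["done"], len(items)):
        state["total"] += items[i]
        state["done"] = i + 1
        with open(CHECKPOINT, "w") as f:   # checkpoint after each item
            json.dump(state, f)
        if crash_once and state["done"] == 2:
            crash_once = False
            raise RuntimeError("simulated failure")
    return state["total"]

items = [10, 20, 30, 40]
try:
    result = process(items)                # fails midway
except RuntimeError:
    with open(CHECKPOINT) as f:            # recovery: reload the checkpoint
        saved = json.load(f)
    result = process(items, state=saved)   # restart resumes, nothing lost

print(result)   # 100 - each item counted exactly once despite the failure
os.remove(CHECKPOINT)
```

A recovery test would assert exactly this: after a forced failure and restart, the result matches a failure-free run.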
- Reliability Testing: All software-testing processes ultimately aim at software reliability; reliability testing, as a part of system testing, encompasses the testing of any specific reliability factors stated explicitly in the SRS. It may not be practical to devise test cases for certain reliability factors: for example, if a system has a downtime objective of two hours or less per forty years of operation, there is no known way of testing this factor directly. However, if a reliability factor is stated as, say, a mean time to failure (MTTF) of 20 hours, it is possible to devise test cases using mathematical models.
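A sketch of estimating MTTF from an observed failure log; the figures below are hypothetical:

```python
def mean_time_to_failure(failure_times_hours):
    """Estimate MTTF as the mean operating time between observed failures."""
    if not failure_times_hours:
        raise ValueError("no failures observed")
    return sum(failure_times_hours) / len(failure_times_hours)

# Hypothetical operating hours logged before each of five failures:
log = [18.0, 22.0, 19.5, 21.0, 19.5]
mttf = mean_time_to_failure(log)
print(f"estimated MTTF: {mttf:.1f} hours")  # compare against a stated 20-hour requirement
```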
- Security Testing: Security testing attempts to verify that protection mechanisms built into a system will protect it from improper penetration. During security testing, the tester plays the role(s) of the individual who desires to penetrate the system. Security testing involves designing test cases that try to penetrate the system using all possible mechanisms.
- Security Testing (Web applications):
In the case of web applications, one has to take into account testing with the appropriate firewall set-up. For data security, one has to consider data-transfer checksums, encryption or the use of digital certificates, MD5 hashing of all vulnerable data, and database integrity. For user security, encrypted passwords, audit-trail logs (recording who, where, why, when and what), automatic log-out based on system specifications (e.g., after 5 minutes of inactivity), and the display of user information on the UI can be taken care of programmatically in the design and code.
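A minimal sketch of a data-transfer checksum; MD5 is used here because the text names it, though modern practice prefers SHA-256 for anything security-sensitive:

```python
import hashlib

def checksum(payload: bytes) -> str:
    """Checksum of transferred data (MD5 per the text above;
    substitute hashlib.sha256 for security-sensitive uses)."""
    return hashlib.md5(payload).hexdigest()

sent = b"patient_id=42&temp=98.6"   # hypothetical payload
digest = checksum(sent)

# The receiver recomputes the checksum; a mismatch reveals corruption or tampering.
assert checksum(sent) == digest
tampered = b"patient_id=42&temp=99.6"
assert checksum(tampered) != digest
```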
- Serviceability Testing:
Serviceability testing covers the serviceability or maintainability characteristics of the software. The requirements stated in the SRS may include
i) service aids to be provided with the system, e.g., storage-dump programs, diagnostic programs
ii) the mean time to debug an apparent problem
iii) the maintenance procedures for the system
iv) the quality of the internal-logic documentation
Test cases are to be devised to ensure the coverage of the stated aspects.
- Storage Testing:
Storage testing is to ensure that the storage requirements are within the specified bounds. For instance, the amounts of the primary and secondary storage the software requires and the sizes of temporary files that get created.
- Stress Testing:
Stress tests are designed to confront programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume. Test cases may be tailored by keeping some of the following examples in view:
i) Input data rates may be increased by an order of magnitude to determine how input functions will respond
ii) Test cases that may cause excessive hunting
iii) Test cases that may cause thrashing in a virtual operating system may be designed
iv) Test cases that may create disk resident data.
v) Test cases that require maximum memory or other resources may be executed
- Stress Testing (Web applications):
This refers to testing system functionality while the system is under unusually heavy or peak load; it is similar to the validation testing but is carried out in a "high-stress" environment. This requires some idea about expected load levels of the Web application. One of the criteria for web applications would be number of concurrent users using the application.
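Concurrent-user load can be simulated in-process with threads; the shared counter below is a hypothetical stand-in for a server resource:

```python
import threading

class Counter:
    """Stand-in for a shared server resource exercised under concurrent load."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def handle_request(self):
        with self._lock:   # the behaviour under test: no lost updates under load
            self.value += 1

def stress(users: int = 50, requests_per_user: int = 200) -> int:
    """Simulate many concurrent users hammering the shared resource."""
    counter = Counter()

    def user():
        for _ in range(requests_per_user):
            counter.handle_request()

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

# Under peak load every request must still be processed exactly once.
assert stress() == 50 * 200
```

Against a real web application the same idea is applied with load-testing tools issuing concurrent HTTP requests rather than in-process threads.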
- Usability Testing:
Usability testing is an attempt to uncover usability problems in the software involving the human factor.
i) Is each user interface suited to the intelligence, educational background, and environmental pressures of the end user?
ii) Are the outputs of the program meaningful, useable, storable, etc.?
iii) Are the error messages meaningful, easy to understand?
- Usability Testing (Web Applications):
The intended audience will determine the "usability" testing needs of the Web site. Additionally, such testing should take into account the current state of the Web and Web culture.
- Volume Testing:
Volume Testing is to ensure that the software
i) can handle the volume of data as specified in the SRS
ii) does not crash with heavy volumes of data, but gives an appropriate message and/or makes a clean exit.
To achieve this, the software is subjected to heavy volumes of data and the behaviour is observed.
i) A compiler would be fed an absurdly large source program to compile
ii) A linkage editor might be fed a program containing thousands of modules
iii) An operating system's job queue would be filled to capacity
iv) If the software is supposed to handle files spanning multiple volumes, enough data is created to cause the program to switch from one volume to another
As a whole, the test cases shall try to test the extreme capabilities of the programs and attempt to break the program so as to establish a sturdy system.
- Link testing (for web based applications):
This type of testing determines if the site's links to internal and external Web pages are working. A Web site with many links to outside sites will need regularly scheduled link testing, because Web sites come and go and URLs change. Sites with many internal links (such as an enterprise-wide Intranet, which may have thousands of internal links) may also require frequent link testing.
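Link testing starts by extracting the anchors from a page; a sketch using Python's standard HTML parser (actually requesting each URL to verify it is omitted, as that needs network access):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags; a link checker would then
    request each URL and flag non-2xx responses."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page fragment with one internal and one external link.
page = '<p><a href="/home">Home</a> <a href="https://example.com">Ext</a></p>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)   # ['/home', 'https://example.com']
```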
- HTML validation (for web based applications):
The need for this type of testing will be determined by the intended audience, the type of browser(s) expected to be used, whether the site delivers pages based on browser type or targets a common denominator. There should be adherence to the HTML programming guidelines as defined in Qualify.
- Load testing (for web based applications):
If there is a large number of interactions per unit time on the web site, testing must be performed under a range of loads to determine at what point the system's response time degrades or fails. The web server software and configuration settings, CGI scripts, database design, and other factors can all have an impact.
- Validation or functional testing (for web applications):
This is typically a core aspect of testing to determine if the Web site functions correctly as per the requirements specifications. Sites utilising CGI-based dynamic page generation or database-driven page generation will often require more extensive validation testing than static-page Web sites.
- Extensibility / Promotability Testing:
This verifies that the software can be moved from one run-time environment to another without requiring modifications to the software, e.g., that the application can move from the development environment to a separate test environment.
5. Acceptance Testing
When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements as documented in the user requirement document (URD). Acceptance tests are conducted at the development site or at the customer site, depending upon the requirements and mutually agreed principles, and may be conducted by the customer themselves, depending on the type of project and the contractual agreement.
Back to Software Testing Models