Bug Life Cycle
A bug can be defined as abnormal behavior of the software; in practice, no software exists without bugs. How completely bugs are eliminated from the software depends on the effectiveness of the testing performed on it. A bug is a specific concern about the quality of the Application Under Test.
13.1 New
When a bug is posted for the first time, its state will be “New”. This means the bug has not yet been approved.
13.2 Deferred
If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to “Deferred”.
13.3 Assigned
The ‘Assigned to’ field is set by the project lead or manager, who assigns the bug to a developer.
13.4 Resolved/Fixed
When the developer makes the necessary code changes and verifies them, he/she can set the bug status to ‘Fixed’, and the bug is passed to the testing team.
13.5 Could not reproduce
If the developer is not able to reproduce the bug by following the steps given in the bug report by QA, the developer can mark the bug as ‘CNR’. QA then needs to check whether the bug is still reproducible and, if so, reassign it to the developer with detailed reproduction steps.
13.6 Need more information
If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as ‘Need more information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
13.7 Reopen
If QA is not satisfied with the fix, or the bug is still reproducible after the fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.
13.8 Closed
If the bug is verified by the QA team, the fix is sound, and the problem is solved, QA can mark the bug as ‘Closed’.
13.9 Rejected/Invalid
Sometimes a developer or team lead can mark a bug as ‘Rejected’ or ‘Invalid’ if the system is working according to specifications and the bug is due only to some misinterpretation.
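The states above form a simple workflow. As a minimal sketch, assuming Python and using the state names and transitions described in the sections above (real trackers such as Mantis define their own), the life cycle could be modeled like this:

from enum import Enum

class BugState(Enum):
    NEW = "New"
    DEFERRED = "Deferred"
    ASSIGNED = "Assigned"
    FIXED = "Resolved/Fixed"
    CNR = "Could not reproduce"
    NEED_INFO = "Need more information"
    REOPEN = "Reopen"
    CLOSED = "Closed"
    REJECTED = "Rejected/Invalid"

# Allowed transitions, keyed by the current state (illustrative, per the text above).
TRANSITIONS = {
    BugState.NEW: {BugState.ASSIGNED, BugState.DEFERRED, BugState.REJECTED},
    BugState.ASSIGNED: {BugState.FIXED, BugState.CNR, BugState.NEED_INFO, BugState.REJECTED},
    BugState.FIXED: {BugState.CLOSED, BugState.REOPEN},
    BugState.CNR: {BugState.ASSIGNED},
    BugState.NEED_INFO: {BugState.ASSIGNED},
    BugState.REOPEN: {BugState.ASSIGNED},
    BugState.DEFERRED: {BugState.ASSIGNED},
}

def move(current: BugState, target: BugState) -> BugState:
    # Reject any state change the workflow above does not allow.
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

For example, move(BugState.NEW, BugState.ASSIGNED) succeeds, while moving a ‘New’ bug straight to ‘Closed’ raises an error.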
Entry & Exit Criteria
Entry and exit criteria are required to start and end testing, and they are a must for the success of any project.
If you do not know where to start and where to finish, your goals are not clear. By defining entry and exit criteria you define your boundaries. For instance, you can define an entry criterion that the customer should provide the requirements document or acceptance plan; if this criterion is not met, you do not start the project. At the other end, you can also define exit criteria for your project. For instance, a common exit criterion in projects is that the customer has successfully executed the acceptance test plan.
11.1 Entry Criteria
Entry criteria define everything that is needed before testing can start. It is very necessary for the tester/QA to know the criteria for entering the testing phase.
In general, entry criteria are a set of conditions that permit a task to be performed; if any of these conditions is absent, the task cannot begin.
Entry criteria conditions should be:
All developed code must be unit tested; unit and link testing must be completed and signed off by the development team.
Functional and business requirements should be clarified, confirmed, and approved.
Test plan and test cases reviewed and approved.
Test environment/testware prepared.
Test data available.
Application available for testing.
QA/testers have sufficient knowledge of the application.
Resources ready.
11.2 Exit Criteria
Exit criteria are a set of conditions based on which you can say that a particular task is finished.
This can be difficult to determine. Many modern software applications are so complex, and run in such
an interdependent environment, that complete testing can never be done. “When to stop testing” is one
of the most difficult questions for a test engineer.
Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with a certain percentage passed (a sketch of this check follows the list)
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
All defects are fixed or closed
Closure reports are signed off
All test cases have been executed and passed
Beta or alpha testing period ends
The risk in the project is within acceptable limits.
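As a hedged illustration of the pass-percentage criterion above, a simple check could look like the following Python sketch; the 95% threshold and the counts are invented for the example, since each project defines its own numbers.

def exit_criteria_met(executed: int, passed: int, total: int,
                      pass_threshold: float = 0.95) -> bool:
    # All test cases must have been executed...
    if executed < total:
        return False
    # ...and the pass rate must meet the agreed threshold.
    return passed / executed >= pass_threshold

# Example: 198 of 200 executed tests passed -> 99% pass rate, criterion met.
print(exit_criteria_met(executed=200, passed=198, total=200))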
Managing The Required Test Resource
It is the responsibility of a person or group of persons to manage the resources that will be required during testing.
Design System Test Cases involves identifying test cycles, test cases, entry and exit criteria, expected results, etc. In general, test conditions/expected results will be identified by the Test Team in conjunction with the Development Team. The Test Team will then identify the test cases and the data required. The test conditions are derived from the Program Specifications Document.
Build System Test Cases is where the tester writes the test cases for the application and records the expected and actual results. Test cases help the testers during the testing process.
Executing Test Cases is where a senior person verifies the built test cases and gives each a status such as PASS or FAIL.
Analyze Test Results is where the most senior person analyzes all the built and executed test cases.
Build Test Environment includes requesting/building hardware, software and data set-ups.
Issue Test Reports – The tests identified in the Design/Build Test Procedures will be executed. All results will be documented, and Bug Report Forms filled out and given to the Development Team as necessary.
Signoff - Signoff happens when all pre-defined exit criteria have been achieved.
8. Risks and contingencies
8.1 What is risk?
“Risks are future, uncertain events with a probability of occurrence and a potential for loss.” Risk identification and management are the main concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.
8.2 Types of risks
Risks are identified, classified, and managed before the actual execution of the program. These risks are classified into the following categories:
Schedule risks, budget risks, operational risks, technical risks, and programmatic (external) risks.
8.2.1 Schedule Risk
The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project and, ultimately, the company's economy, and may lead to project failure.
Schedules often slip due to following reasons:
Wrong time estimation
Resources not tracked properly (staff, systems, individual skills, etc.).
Failure to identify complex functionalities and time required to develop those functionalities.
8.2.2 Budget Risk:
Wrong budget estimation.
Cost overruns
Project scope expansion
8.2.3 Operational Risks
Risks of loss due to improper process implementation, failed systems, or external events.
Causes of Operational risks:
Failure to address priority conflicts
Failure to resolve the responsibilities
Insufficient resources
No proper subject training
No resource planning
No communication in team.
8.2.4 Technical risks
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:
Continuous changing requirements
No advanced technology is available, or the existing technology is in its initial stages.
Product is complex to implement.
Difficult project modules integration.
8.2.5 External Risks
These are external risks beyond the operational limits; they are uncertain risks outside the control of the program.
These external events can be:
Running out of funds.
Market development
Changing customer product strategy and priority
Government rule changes.
7. Hardware & Software Required
7.1 Windows-Based Software/Application Resources
Windows-based applications need to be installed on your machine before they can be accessed. Windows-based applications do not have built-in validation controls, and Windows-based programs run through the operating system. The following are some Windows-based software resources that a tester needs during testing.
MS Excel.
MS Office
Paint.
All Browsers.
Jing.
Screen Capture.
IE Tester.
7.2 Web Based Software/Application Resources
Web-based applications can be accessed from anywhere in the world; they run on a web server. Web services and web sites are web-based applications. A web application uses authentication and authorization processes and has validation controls. Web-based programs run through web browsers. The following are some web-based software resources that a tester needs during testing.
TCMS (Test Case management system).
Google Docs.
7.3 Test Management Tools
TestRail and MS Excel are used for the management of test cases.
7.4 Bug Tracking Tools
A bug tracking tool provides communication between the developer and the tester; the tester reports bugs in the bug tracking tool. The following are some tools used to report bugs. The tools chosen depend on company requirements, and tools also have versions and patches.
Mantis.
7.5 Automation Tools
In software testing, test automation is the use of special software (separate from the software being tested) to control the execution of tests. The tester creates a script once and executes it many times, which saves time and resources. Using a test automation tool, it is possible to record a test suite and re-play it as required; a sketch follows below.
The tools chosen depend on company requirements.
Selenium.
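As a minimal sketch of such a script, assuming the Selenium WebDriver Python bindings and a locally installed Chrome browser (the URL and element names below are illustrative placeholders, not part of any real application):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome and its driver are available
try:
    driver.get("https://example.com/login")          # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("tester")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.NAME, "submit").click()
    # A simple check that can be re-played unchanged on every new build:
    assert "Welcome" in driver.page_source
finally:
    driver.quit()

Once written or recorded, the same script can be executed against each new build, which is what makes automation cheaper than repeating the steps manually.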
7.6 Operating System
The operating system is also chosen according to the requirements of the application or website.
Windows 7
Windows XP
Linux (Fedora/Ubuntu)
7.7 Testing Server
Testing servers are a useful addition to a website workflow because they provide a way to test new pages and designs on a web server that is not visible to customers. A testing server is very useful for sites that use a lot of dynamic content, programming, or CGIs, because unless you have a server and database set up on your local computer, it is very difficult to test these pages offline. With a testing server, you can post your changes to the site and then see whether the programs, scripts, or database still work as you intended.
Companies that have a testing server typically add it to the workflow like this:
The designer builds the site locally and tests locally, just as above.
The designer or developer uploads changes to the testing server to test dynamic elements (PHP or other server-side scripts, CGI, and Ajax).
Approved designs are moved to the production server.
7.8 Hardware Resources
As per the requirements of the application or website.
6. Types of Testing
6.1 Black box Testing
A method of software testing that verifies the functionality of an application without having specific knowledge of the application's code/internal structure. Tests are based on requirements and functionality.
It is performed by QA teams.
6.2 White box Testing
Testing technique based on knowledge of the internal logic of an application’s code; it includes tests like coverage of code statements, branches, paths, and conditions. It is performed by software developers.
6.3 Functional Testing
Type of black box testing that bases its test cases on the specifications of the software component under test. It is performed by testing teams.
6.4 Active Testing
Type of testing that consists of introducing test data and analyzing the execution results. It is usually conducted by the testing teams.
6.5 Agile Testing
Software testing practice that follows the principles of the agile manifesto, emphasizing testing from the
perspective of customers who will utilize the system. It is usually performed by the QA teams.
6.6 Ad-hoc Testing
Testing performed without planning and documentation - the tester tries to 'break' the system by
randomly trying the system's functionality. It is performed by the testing teams.
6.7 Alpha Testing
Testing of a software product or system conducted at the developer's site. Usually it is performed by end users.
6.8 Beta Testing
Final testing before releasing the application for commercial purposes. It is typically done by end-users or others.
6.9 Acceptance Testing
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to
enable the customer to determine whether or not to accept the system. It is usually performed by the customer.
6.10 Big Bang Integration Testing
Testing technique which integrates individual program modules only when everything is ready. It is performed by the testing teams.
6.11 Boundary Value Testing
It is a technique to check whether the application accepts the expected range of values and rejects values that fall outside that range; a sketch follows below.
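For example, for an input field specified to accept ages from 18 to 60, boundary value tests probe the edges of the range. A minimal sketch, where the validate_age function and its limits are hypothetical:

def validate_age(age: int) -> bool:
    # Hypothetical rule: the field accepts ages 18 through 60 inclusive.
    return 18 <= age <= 60

# Values on the boundary must be accepted...
assert validate_age(18) and validate_age(60)
# ...and values just outside it must be rejected.
assert not validate_age(17) and not validate_age(61)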
6.12 Compatibility Testing
Testing technique that validates how well software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.
6.13 Browser Compatibility Testing
Testing technique that validates how well software performs in a particular browser.
6.14 Component Testing:
Testing technique similar to unit testing but with a higher level of integration - testing is done in the
context of the application instead of just directly testing a specific method. Can be performed by
testing or development teams.
6.15 Domain Testing:
White box testing technique which involves checking that the program accepts only valid input.
It is usually done by software development teams and occasionally by automation testing teams.
6.16 End-to-end Testing
Similar to system testing, involves testing of a complete application environment in a situation that
mimics real-world use, such as interacting with a database, using network communications, or
interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.
6.17 Endurance Testing
Type of testing which checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by QA engineers.
6.18 Equivalence Partitioning Testing
In this type of testing the input domain is divided into classes (partitions) of equivalent values, and the application or software is verified using a single representative value from each class. If that single value behaves correctly, the whole class is assumed to behave correctly; otherwise there is a bug. A sketch follows below.
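A sketch of the idea, where the discount rule and its partitions are invented purely for illustration:

def discount(quantity: int) -> float:
    # Hypothetical rule: 1-9 items -> no discount, 10-99 -> 10%, 100+ -> 20%.
    if quantity >= 100:
        return 0.20
    if quantity >= 10:
        return 0.10
    return 0.0

# One representative value per equivalence partition stands in for the whole class.
assert discount(5) == 0.0     # partition 1..9
assert discount(50) == 0.10   # partition 10..99
assert discount(500) == 0.20  # partition 100+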
6.19 Fuzz Testing
Software testing technique that provides invalid, unexpected, or random data to the inputs of a
program - a special area of mutation testing. Fuzz testing is performed by testing teams.
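The same idea can be shown with a toy fuzzing loop; the parse_record function below is a hypothetical target invented for the example.

import random
import string

def parse_record(text: str) -> dict:
    # Hypothetical target under test: parses "key=value;key=value" into a dict of ints.
    pairs = [field.split("=", 1) for field in text.split(";") if field]
    return {key: int(value) for key, value in pairs}

# Feed random, often invalid input; fuzzing hunts for crashes, not wrong answers.
for _ in range(1000):
    junk = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 50)))
    try:
        parse_record(junk)
    except Exception as exc:  # any unhandled crash is a fuzzing find
        print(f"Input {junk!r} raised {exc!r}")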
6.20 GUI software Testing
The process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done by the testing teams.
6.21 Globalization Testing
Testing method that checks proper functionality of the product with any of the culture/locale settings, using every type of international input possible. It is performed by the testing team.
6.22 Integration Testing
In integration testing the individually tested units are grouped as one and the interfaces between them are tested. Integration testing identifies the problems that occur when individual units are combined, i.e., it detects problems in the interface between two units. Integration testing is done after unit testing.
There are mainly three approaches to integration testing: top down, bottom up, and big bang (see 6.10). The first two are outlined below.
Top down approach:
The top down approach tests the integration from top to bottom, following the architectural structure.
Example: integration can start with the GUI, missing components are substituted by stubs, and integration proceeds from there.
Bottom up approach:
In the bottom up approach, testing starts from the bottom of the control flow; higher-level components are substituted with drivers. A sketch of a stub appears below.
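As a hedged illustration of a stub in the top down approach (every name below is invented): a top-level report module is tested before the real database layer exists, so a stub returns canned data in its place.

class DatabaseStub:
    # Stands in for the unfinished database layer during top down integration.
    def fetch_orders(self, customer_id: int) -> list:
        return [{"id": 1, "total": 250.0}, {"id": 2, "total": 99.5}]

class ReportModule:
    # The higher-level component actually under test.
    def __init__(self, db):
        self.db = db

    def total_spent(self, customer_id: int) -> float:
        return sum(order["total"] for order in self.db.fetch_orders(customer_id))

# The integration of ReportModule is exercised against the stub.
report = ReportModule(DatabaseStub())
assert report.total_spent(customer_id=42) == 349.5

A driver in the bottom up approach is the mirror image: a throwaway caller written to exercise a finished lower-level component from above.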
6.23 Internationalization Testing
The process which ensures that the product’s functionality is not broken and that all messages are properly externalized when it is used in different languages and locales. It is usually performed by the testing teams.
6.24 Localization Testing
Part of software testing process focused on adapting a globalized application to a particular
culture/locale. It is normally done by the testing teams.
6.25 Manual-Support Testing
Testing technique that involves testing all the functions performed by people while preparing data and using that data in an automated system. It is conducted by testing teams.
6.26 Negative Testing
Also known as "test to fail" - testing method where the tests' aim is showing that a component or system does not work. It is performed by manual or automation testers.
6.27 Parallel Testing
Testing technique which has the purpose to ensure that a new application which has replaced its older version has been installed and is running correctly. It is conducted by the testing team.
6.28 Regression Testing
Type of software testing that seeks to uncover software errors after changes to the program (e.g. bug
fixes or new functionality) have been made, by retesting the program. It is performed by the testing
teams.
6.29 Requirements Testing
Testing technique which validates that the requirements are correct, complete, unambiguous, and logically consistent, and allows designing a necessary and sufficient set of test cases from those requirements. It is performed by QA teams.
6.30 Sanity Testing
Testing technique which determines if a new software version is performing well enough to accept it
for a major testing effort. It is performed by the testing teams.
6.31 Smoke Testing
Testing technique which examines all the basic components of a software system to ensure that they
work properly. Typically, smoke testing is conducted by the testing team immediately after a software build is made.
6.34 Stress Testing
Testing technique which evaluates a system or component at or beyond the limits of its specified requirements. It is usually conducted by the performance engineer.
6.35 System Testing
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It is conducted by the testing teams in both development and target environment.
6.36 Unit Testing
Software verification and validation method in which a programmer tests if individual units of source
code are fit for use. It is usually conducted by the development team.
6.37 Usability Testing:
Testing technique which verifies the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. It is usually performed by end users.
5. Level of Testing
There are different levels of software testing, each defined by a given environment. An environment is a collection of people, hardware, software, interfaces, data, etc.
The different levels of testing are defined as:
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Regression testing
5.1 Unit Testing
A unit is the smallest testable piece of software that can be compiled, linked, and loaded,
e.g. functions/procedures, classes, interfaces – normally tested by the programmer
Test cases are written after coding
Buddy testing is preferred to cover unit testing.
5.2 Buddy Testing
A team approach to coding and testing: one programmer codes, the other tests, and vice versa
Test cases – written by the tester (before coding starts)
Better than the single-worker approach:
Objectivity
Cross-training
Models the program specification requirements
5.3 Integration Testing
Tests for correct interaction between different units; systems are built by merging existing libraries and modules coded by different people.
Bottom up integration testing
Use of drivers
Top down integration testing
Use of stubs
It is done by developers/testers
Test cases are written when the detailed specification is ready
Testing continues throughout the project
5.4 System Testing
Test of overall interaction of components
Find disparities between implementation and specification
Usually where most of the resources go
Involves – load, performance, reliability and security testing
It is done by test team
Test cases written when high level design is ready
It is done on a system test machine
5.5 Acceptance Testing
Demonstrates user satisfaction
Users are an essential part of the process
Done by the test team and the customer
Done in a simulated or real environment
5.6 Regression Testing
An ongoing process throughout the testing lifecycle
Does a new bug fix break previously tested units?
Perform regression tests whenever the program changes
4. Testing Methodology
There are numerous methodologies available for developing and testing software. The methodology you choose depends on factors such as the nature of the project, the project schedule, and resource availability.
Although most software development projects involve periodic testing, some methodologies focus on getting input from testing early in the cycle rather than waiting for it until a working model of the system is ready. Methodologies that require early test involvement have several advantages, but also involve tradeoffs in terms of project management, schedule, customer interaction, budget, and communication among team members.
Testing fits into the various software development life cycles, but today we use the Agile methodology and the V-Model:
4.1 Agile Methodology
Agile, as the name implies, means doing something quickly. Hence agile testing refers to validating the client's requirements as soon as possible and making the product customer friendly. As soon as the build is out, testing is expected to start, and any bugs found are to be reported quickly.
Emphasis has to be laid on the quality of the deliverable in spite of the short timeframe, which further helps in reducing the cost of development.
Advantages of Agile Methodology:
The very first advantage the company sees with the agile methodology is the saving of time and money. Less documentation is required; though documents help a great deal in verifying and validating requirements, considering the time frame of the project this approach focuses more on the application than on documenting things. It tends to gather regular feedback from the end user so that it can be implemented as soon as possible.
An advantage the agile methodology offers over other available approaches is that if any change request or enhancement comes in between phases, it can be implemented without any budget constraint. Though it is useful for any programming language or technology, it is advisable to employ it for Web 2.0 projects or projects that are new in media.
Daily meetings and discussions for a project following the agile approach help determine issues well in advance and allow the team to work on them accordingly. Quick coding and testing make management aware of gaps in either the requirements or the technology used, so a workaround can be sought. Hence, with quicker development and testing and constant feedback from the user, the agile methodology becomes the appropriate approach for projects to be delivered in a short span of time.
4.2 V Model
The V-Model, also called the Vee-Model, is a product-development process originally developed in Germany for government defense projects.
The development process proceeds from the upper left point of the V toward the right, ending at the upper right point. In the left-hand, downward-sloping branch of the V, development personnel define business requirements, application design parameters, and design processes. At the base point of the V, the code is written. In the right-hand, upward-sloping branch of the V, testing and debugging are done. Unit testing is carried out first, followed by bottom-up integration testing. The extreme upper right point of the V represents product release and ongoing support.
4.2.1 Unit testing
In computer programming, unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers. The purpose is to verify the internal logic code by testing every possible branch within the function, also known as test coverage. Static analysis tools are used to facilitate in this process, where variations of input data are passed to the function to test every possible case of execution.
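A minimal sketch of such a unit test, using Python's built-in unittest module; the is_leap_year function is a hypothetical unit, chosen so that every branch can be exercised:

import unittest

def is_leap_year(year: int) -> bool:
    # Hypothetical unit under test.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # One test per branch of the function gives full branch coverage.
    def test_divisible_by_4(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

    def test_ordinary_year(self):
        self.assertFalse(is_leap_year(2023))

if __name__ == "__main__":
    unittest.main()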
4.2.2 Integration testing
In integration testing the separate modules will be tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box as the code is not directly checked for errors.
4.2.3 System testing
After the integration test is completed, the next test level is the system test. System testing checks whether the integrated product meets the specified requirements. Why is this still necessary after the component and integration tests?
The reasons for this are as follows:
In the lower test levels, the testing was done against technical specifications, i.e., from the technical perspective of the software producer.
The system test, though, looks at the system from the perspective of the customer and the future user.
The testers validate whether the requirements are completely and appropriately met.
Many functions and system characteristics result from the interaction of all system components; consequently, they are only visible on the level of the entire system and can only be observed and tested there.
4.2.4 User acceptance testing
Acceptance testing is the phase of testing used to determine whether a system satisfies the requirements
specified in the requirements analysis phase. The acceptance test design is derived from the requirements
document. The acceptance test phase is the phase used by the customer to determine whether to accept
the system or not.
Acceptance testing helps:
To determine whether a system satisfies its acceptance criteria or not.
To enable the customer to determine whether to accept the system or not.
To test the software in the "real world" by the intended audience.
To determine system changes according to the original needs.
3. Steps of Testing
The steps to be followed from the start of testing of the software to the end of testing are as follows:
1- Before dynamic testing, there is static testing. Static testing includes the review of documents required for software development. This includes the following activities:
(a) All the documents related to customer requirements and business rules that are required for software design and development should be handed over to QA. These documents should be complete and duly signed by the client and a representative of the company (usually the Project Manager).
(b) QA reviews these documents. Reviewing the documents involves a comprehensive and thorough study of them. If any discrepancy is found, it should be noted and raised in the review meeting with the development team.
(c) After this there should be a formal meeting between QA and the development team regarding these documents. The agenda of this meeting mainly includes what is missing in the documents, QA queries to be answered by the development/project team, and/or clarification required for any confusion.
2- After the software development or build of a module, QA starts dynamic testing. If during development a requirement has been changed on customer demand or for any other reason, that change should be documented, and a copy of the revised document given to QA and discussed as explained in point 1 above.
3- The development and testing environment should be made clear to QA by the development team.
This includes the following activities:
(a)- The server to hit for testing.
(b)- Installation of the latest build on the test server.
(c)- Modules/screens to test.
(d)- Test duration, as decided mutually by the test manager and project manager based on the scope of work and team strength.
(e)- Demo of the software on the test server by the development team to the QC members.
4- After this, test cases and test scenarios are prepared, followed by test execution by QC.
5- A comprehensive bug report is prepared by the testers, and a review/verification by the QC/QA/Testing Head takes place. Before this report is handed over to the development team, there is a thorough review of the bug list by the Test Manager, and if any clarification is required on a submitted bug, the Testing Head discusses it with the assigned tester.
6- Release of bug report by QC Team to Development Team.
7- Discussion/simulation of bugs by QC with the development team, if the development team requires it; the time required for fixing the bugs should be made clear by the dev team at this stage.
8- Feedback from the development team on reported bugs, with the stipulated time frame required to fix all bugs.
9- Any changes made in the software to fix these bugs should be made clear to the QA team by the development team.
10- The testing team then retests or verifies the bugs fixed by the development team.
11- The retesting bug report is submitted to the Test Manager, and then steps 5 to 10 are repeated until the product reaches a stage where it can be released to the customer.
12- The criteria for ending testing should be defined by management or the Test Manager, e.g., when all major bugs are reported and fixed. Major bugs are those that affect the business of the client.
The Test Plan document is usually prepared by the Test Lead or Test Manager, and its focus is on the steps to be executed, the expected results, the actual results, and pass/fail status.
1. Introduction
1.1 SCOPE
The overall purpose of testing is to ensure the {xyz… application} application meets all of its
technical, functional and business requirements. The purpose of this document is to describe the
overall test plan and strategy for testing the {xyz… application} application. The approach described
in this document provides the framework for all testing related to this application. Individual test cases
will be written for each version of the application that is released. This document will also be updated
as required for each release.
Factors influencing test scope
Size of project
Complexity of project
Budget for project
Time scope for project
Number of staff
Why test at different levels
Software development naturally splits into phases
Bugs are easily tracked
Ensures a working subsystem/component/library
Makes software reuse more practical
1.2 TEST OBJECTIVES
The quality objectives of testing the {xyz… application} application are to ensure complete validation of the business and software requirements:
Verify software requirements are complete and accurate
Perform detailed test planning
Identify testing standards and procedures that will be used on the project
Prepare and document test scenarios and test cases
Regression testing to validate that unchanged functionality has not been affected by changes
Manage defect tracking process
Provide test metrics/testing summary reports
Schedule Go/No Go meeting
Require sign-offs from all stakeholders
1.3 TESTING GOALS
The goals in testing this application include validating the quality, usability, reliability, and performance of the application. Testing will be performed from a black-box approach, not based on any knowledge of internal design or code. Tests will be designed around requirements and functionality.
Another goal is to make the tests repeatable for use in regression testing during the project lifecycle, and
for future application upgrades. A part of the approach in testing will be to initially perform
a ‘Smoke Test’ upon delivery of the application for testing. Smoke Testing is typically an initial
testing effort to determine if a new software version is performing well enough to accept it for a
major testing effort. After acceptance of the build delivered for system testing, functions will be tested
based upon the designated priority (critical, high, medium, low).
1.4 Quality
Quality software is reasonably bug-free, meets requirements and/or expectations, and is maintainable.
Testing the quality of the application will be a two-step process of independent verification and
validation.
First, a verification process will be undertaken involving reviews and meetings to evaluate documents,
plans, requirements, and specifications to ensure that the end result of the application is testable, and
that requirements are covered. The overall goal is to ensure that the requirements are clear, complete,
detailed, cohesive, attainable, and testable. In addition, this helps to ensure that requirements are agreed
to by all stakeholders.
Second, actual testing will be performed to ensure that the requirements are met. The standard by which
the application meets quality expectations will be based upon the requirements test matrix, use cases and
test cases to ensure test case coverage of the requirements. This testing process will also help to ensure
the utility of the application – i.e., the design’s functionality and “does the application do what the users
need?”
1.5 Reliability
Reliability is both the consistency and repeatability of the application. A large part of testing an
application involves validating its reliability in its functions, data, and system availability. To ensure
reliability, the test approach will include positive and negative (break-it) functional tests. In addition,
to ensure reliability throughout the iterative software development cycle, regression tests will be
performed on all iterations of the application.
1.6 Purpose of The Test Plan Document
The Project QA/Test Plan is a Master Test Plan that encompasses the entire testing required for a project. It is the highest-level testing plan: it documents the testing strategy for the project, describes the general approach that will be adopted in testing, provides the overall structure and philosophy for any other required testing documents, and defines what will be tested to ensure that the requirements of the product, service, or system (i.e., the deliverables) are met.
Based on the size, complexity, and type of project, subsequent test plans may be created and used
during the testing process to verify that the deliverable works correctly, is understandable, and is
connected logically. These test plans include the following categories and types. Testing
areas/coordinators would have templates specific to these plans.
Types of quality assurance planning include:
Process QA plans document how the project is meeting quality and compliance standards, ensuring the right documentation exists to guide project delivery and corrective actions; and
Requirements/application test plans document how the product, service or system meets stated
business and technical requirements, and how it will work within the defined operational/business
processes and work flow.
The Project QA/Test Plan is completed during the Design phase of the Solution Delivery Life Cycle.
It should be updated anytime additional information or project changes affect its content.
It is a required deliverable on Large and Medium projects, and a best practice on Fast Track projects.
For additional guidance, the Project Classification Worksheet is available.
This template is provided as a guideline to follow in producing the minimum basic information needed
to successfully complete a Project QA/Test Plan.
Project Managers are empowered to use this template as needed to address any specific requirements of
the proposed project at hand. The amount of detail included in the template will depend on the size and
complexity of the project.