NURV’s QA methodology builds on widely acknowledged industry standards and comprises the following processes, known collectively as the QA life-cycle:
A thorough review of the processes and design of the product/application. Subject-matter specialists and designers perform this exercise. Architectural, development and implementation issues are discussed and frozen at this stage. Our QA architects get involved here to determine the potential influence of the architecture and design on the QA process and on platform and tool selection.
Our QA engineers develop test cases that target each desired and undesired behavior of the software. Each function of the software is tested in isolation from the others.
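As a minimal sketch of such test cases, assume a hypothetical `apply_discount` function: one case verifies the desired behavior on valid input, another verifies that undesired input is rejected rather than silently mispriced. (The function and its rules are illustrative, not part of any NURV product.)

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Desired behavior: valid input produces the expected result.
    def test_valid_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    # Undesired behavior: invalid input raises an error instead of
    # returning a wrong price.
    def test_negative_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, -5)

if __name__ == "__main__":
    unittest.main()
```

Keeping the function isolated from the rest of the system is what lets each case pinpoint exactly which behavior broke.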
In this phase, we ensure that a bug fix or modification does not have an undesirable impact on other parts or functions of the software. A mix of test cases covering different functions of the software forms a regression bucket, which is executed each time a bug fix or modification is released to QA. Automated testing tools are especially useful for this regression testing.
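One way to sketch a regression bucket is as a `unittest` suite that pulls cases from several functional areas into a single run; the two placeholder test classes below stand in for real functional tests and are purely illustrative.

```python
import unittest

# Hypothetical test classes from different functional areas; together
# they form the regression bucket.
class LoginTests(unittest.TestCase):
    def test_login_smoke(self):
        self.assertTrue(True)  # placeholder for a real login check

class ReportTests(unittest.TestCase):
    def test_report_smoke(self):
        self.assertTrue(True)  # placeholder for a real report check

def build_regression_bucket():
    """Collect test cases covering several functions into one suite."""
    loader = unittest.TestLoader()
    bucket = unittest.TestSuite()
    for case in (LoginTests, ReportTests):
        bucket.addTests(loader.loadTestsFromTestCase(case))
    return bucket

if __name__ == "__main__":
    # Executed every time a bug fix or modification reaches QA.
    unittest.TextTestRunner(verbosity=2).run(build_regression_bucket())
```

Because the bucket is a single callable suite, an automated tool or CI job can rerun it unattended on every release.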
This is more of a customer (or user) side function. We ensure that the software system meets the agreed quality criteria before being accepted or released. The acceptance criteria may be based on results from previous QA phases (system testing, performance testing, etc.) or may involve an entirely separate testing phase independent of those already performed.
The management of the QA process starts with design reviews and extends through implementation and post-implementation monitoring. The interplay of design reviews, code reviews and release engineering is crucial to manage, as these activities form the interface between the software development team and QA engineering. Configuration management, which entails maintaining separate development, test and production environments, also falls under QA process management. With various technologies in place and an n-tier architecture, this is often more complex than it seems.
This is the most ‘micro’ scale of testing, targeting individual code modules. It is typically done by programmers rather than QA engineers, as it requires detailed knowledge of the internal program code. It is not always easy unless the application has a well-designed architecture with tight code, and may require developing test driver modules or test harnesses. A code walkthrough entails reviewing a programmer’s code; it also verifies that the programmer has used good coding practices and followed established coding standards.
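A test driver module can be sketched as a small script that feeds a table of inputs and expected outputs through the function under test. The `parse_id` function below is a hypothetical internal routine, assumed only for illustration.

```python
# Minimal test-driver sketch: a table of (input, expected) pairs is
# driven through the module under test.

def parse_id(raw):
    """Hypothetical internal function: parse a numeric ID string."""
    raw = raw.strip()
    if not raw.isdigit():
        raise ValueError(f"not a numeric ID: {raw!r}")
    return int(raw)

def run_driver(cases):
    """Drive the function under test; return the cases that failed."""
    failures = []
    for raw, expected in cases:
        try:
            got = parse_id(raw)
        except ValueError:
            got = None  # expected=None means "should be rejected"
        if got != expected:
            failures.append((raw, expected, got))
    return failures

if __name__ == "__main__":
    cases = [(" 42 ", 42), ("007", 7), ("abc", None)]
    print("failures:", run_driver(cases))
```

A driver like this is disposable scaffolding: it exists only so a module can be exercised before the surrounding application is complete.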
We test the combined parts of an application to determine whether they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. An automated tool can also be used to perform integration testing.
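A minimal sketch of the idea, assuming two hypothetical ‘parts’ (a record formatter and an in-memory store): unlike a unit test, the case below exercises both together and checks that the formatter’s output survives the round trip through the store.

```python
import unittest

# Two hypothetical parts under test.
def format_record(name, amount):
    """Format a record as 'name:amount' with two decimal places."""
    return f"{name}:{amount:.2f}"

class Store:
    """Trivial in-memory storage layer."""
    def __init__(self):
        self._rows = []
    def save(self, row):
        self._rows.append(row)
    def all(self):
        return list(self._rows)

class TestFormatterStoreIntegration(unittest.TestCase):
    # The integration check: formatted output must round-trip
    # through the store unchanged.
    def test_formatted_record_is_stored(self):
        store = Store()
        store.save(format_record("widget", 9.5))
        self.assertEqual(store.all(), ["widget:9.50"])

if __name__ == "__main__":
    unittest.main()
```

The same pattern scales up: replace the in-memory store with a real database or a client talking to a server, and the test becomes a networked integration test.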
Performance Testing, Stress/Load Testing
The application is put under heavy load to determine at what point the software system’s response time degrades or the system fails. The purpose is to determine the capacity limits of the software system.
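The ramp-up idea can be sketched with a thread pool that fires simulated requests at increasing concurrency and times each run; `handle_request` is a stand-in for a real service call, assumed only for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for a real service call (e.g. an HTTP request)."""
    time.sleep(0.01)  # simulated processing time
    return True

def load_test(concurrency, requests_per_worker):
    """Fire requests at the given concurrency; return (ok, elapsed)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(handle_request)
                   for _ in range(concurrency * requests_per_worker)]
        ok = sum(f.result() for f in futures)
    elapsed = time.perf_counter() - start
    return ok, elapsed

if __name__ == "__main__":
    # Ramp the load up and watch where response time stops scaling.
    for workers in (1, 4, 16):
        ok, elapsed = load_test(workers, 5)
        print(f"{workers:>2} workers: {ok} ok in {elapsed:.2f}s")
```

In practice a dedicated load-testing tool would drive real traffic, but the measurement loop is the same: increase concurrency until latency degrades or requests start failing, and record that point as the capacity limit.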
QA Testing Process Management
Timely delivery of a quality software system depends heavily on effective management of the QA testing process alongside the software development process. Managing the QA testing process involves setting up the test environment, developing and executing the test plan, coordinating interaction between the QA and software development teams, tracking defects and accepting fix releases, and determining the minimum quality criteria for releases and delivery.
We manage the quality assurance process for a software project end to end: building the appropriate control process, implementing it and managing it.