Roles and Responsibility of a Quality Analyst
A quality analyst is a proxy owner of the software, acting as the client's representative within the project. Testers aim to put themselves in the shoes of actual users, adopting their expectations and preferences, and to test the software with the intention of breaking it. The goal is to uncover the loopholes and other shortcomings that the application cannot withstand. Broadly, a quality analyst takes care of the following:
- Inspect and validate the actual deliverables.
- Audit the standards and processes being followed.
- Establish quality-centric metrics, both for the project and for the resources.
- Formulate test plans, record their results, and define quality assurance practices.
- Identify areas of non-compliance.
- Prepare reports.
- Set up specifications for the workflow.
- Take preventive actions to minimize the room for bugs, and corrective actions to eliminate those that appear.
- Document the scripts containing the black-box and gray-box test cases.
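The last point, scripted test cases, can be illustrated with a minimal sketch. The function under test (`apply_discount`) and the case IDs are hypothetical, invented only to show the shape of a documented test-case script:

```python
# Hypothetical example: a scripted black-box test-case document for a
# discount-calculation function. Function name and rules are illustrative.

def apply_discount(price, percent):
    """Apply a percentage discount; inputs outside 0-100% are rejected."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid input")
    return round(price * (1 - percent / 100), 2)

# Each test case records an ID, the inputs, and the expected output --
# the kind of script a quality analyst would maintain and version.
TEST_CASES = [
    ("TC-01", (100.0, 10), 90.0),    # nominal discount
    ("TC-02", (100.0, 0), 100.0),    # boundary: no discount
    ("TC-03", (100.0, 100), 0.0),    # boundary: full discount
]

def run_cases():
    """Execute every documented case and report pass/fail per ID."""
    return {cid: apply_discount(*args) == expected
            for cid, args, expected in TEST_CASES}
```

Keeping the cases in data rather than in code makes the script readable to non-programmers and easy to audit against the specification.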
Types of Testing
1. Functional or Black Box Testing: Carried out to verify that the system behaves according to expectations. Various possible inputs are fed to the system to see whether it behaves erratically or throws an exception at any point; the actual output is compared with the expected output and any discrepancies are reported.
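A compact sketch of this input-probing idea, assuming a hypothetical `parse_age` function as the system under test:

```python
# Illustrative black-box probe: feed valid and invalid inputs to a
# function and record whether it returns normally or rejects the input.
# parse_age is a stand-in for the system under test, not a real API.

def parse_age(text):
    value = int(text)          # raises ValueError for non-numeric text
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def probe(inputs):
    """Return a report mapping each input to its observed outcome."""
    report = {}
    for item in inputs:
        try:
            report[item] = parse_age(item)
        except ValueError:
            report[item] = "rejected"
    return report
```

The tester only sees inputs and outcomes, never the implementation, which is exactly the black-box perspective.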
2. Structural or White Box Testing: Ideally performed by the developers themselves by unit testing each module they have designed. In the TDD approach, the specification for a function's intended behavior is written first in the form of unit tests; the actual code is then written, enhanced, and refactored until those tests pass.
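The TDD workflow can be sketched as follows; `slugify` and its specification are invented for illustration:

```python
# TDD sketch (illustrative): in a real TDD workflow, the TestSlugify
# specification below would be written first, and slugify implemented
# afterwards to make it pass.
import unittest

def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # These assertions are the spec the implementation had to satisfy.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_lowercases_input(self):
        self.assertEqual(slugify("QA"), "qa")
```

Because the tests encode the intended behavior up front, later refactoring of `slugify` can proceed safely: any regression fails the suite immediately.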
3. Unit Testing: Each unit, i.e. each independent module of the code, is tested in isolation to examine its behavior. It is a form of structural testing; executing tests on individual methods should neither affect nor be affected by the rest of the system.
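One common way to achieve that isolation is to inject the unit's collaborators, so a stub can replace the rest of the system. The names below are hypothetical:

```python
# Isolation sketch (illustrative): the unit under test receives its
# collaborator as a parameter, so a stub replaces any real database
# or service during the test.

def total_due(order_ids, price_lookup):
    """Sum the prices of the given orders via an injected lookup."""
    return sum(price_lookup(oid) for oid in order_ids)

def stub_prices(oid):
    # Stand-in for a database call; fixed data keeps the test isolated.
    return {"a": 5, "b": 7}[oid]
```

With the stub in place, the test exercises only `total_due`'s own logic, and no external system can make it flaky.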
4. Integration Testing: The units might individually work as designed and live up to expectations, but it is equally important to take note of the big picture: how the units behave when they are combined or integrated. Integration can proceed top-down or bottom-up through the module hierarchy.
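A minimal sketch of the idea, with two invented units that each pass in isolation and are then exercised together:

```python
# Integration sketch (illustrative): normalize and lookup each work on
# their own; the integration test checks the combined pipeline, where
# the output of one unit feeds the input of the other.

def normalize(text):
    """Trim whitespace and lowercase a user-supplied name."""
    return text.strip().lower()

def lookup(user_db, name):
    """Return the role stored for a normalized name."""
    return user_db.get(name, "unknown")

def find_user(user_db, raw_name):
    # Integration point: defects often hide in this hand-off, e.g. if
    # lookup expected names that were never normalized.
    return lookup(user_db, normalize(raw_name))
```

Even when both units pass their unit tests, only an integrated run shows that the hand-off between them is correct.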
5. Regression Testing: Aims at locating new faults, or bugs that previously existed in the application and have come back due to changes in the release-specific configuration or small patches of fixed bugs.
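In practice this means accumulating a suite of cases, including one for every past bug, and re-running it after each change. A minimal sketch with an invented `word_count` unit:

```python
# Regression sketch (illustrative): a saved suite of cases is re-run
# after every change so previously fixed bugs cannot silently return.

def word_count(text):
    """Count whitespace-separated words."""
    return len(text.split())

# Cases accumulated over past releases. The empty-string case guards a
# hypothetical old bug where "" once returned 1 instead of 0.
REGRESSION_SUITE = [
    ("hello world", 2),
    ("", 0),             # guards the old empty-string bug
    ("  spaced  ", 1),
]

def run_regression():
    """True only if every historical case still passes."""
    return all(word_count(t) == expected for t, expected in REGRESSION_SUITE)
```

The suite only grows: a case added for a fixed bug is never removed, which is what makes the regression net effective over time.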
6. Performance Testing: Done to check the scalability and stability of the system. Load is incrementally increased on the system to check whether it can sustain its normal behavior under those conditions.
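The incremental-load idea can be sketched with a toy harness; the workload, step sizes, and `process` function are all assumptions for illustration:

```python
# Load-ramp sketch (illustrative): load is increased stepwise and the
# time per step is recorded so degradation becomes visible. Real
# performance testing would use dedicated tooling and realistic load.
import time

def process(items):
    """Toy workload standing in for the system under test."""
    return [x * x for x in items]

def ramp_load(steps=(10, 100, 1000)):
    """Run the workload at increasing sizes; return timing per step."""
    timings = {}
    for n in steps:
        start = time.perf_counter()
        process(range(n))
        timings[n] = time.perf_counter() - start
    return timings
```

Plotting the recorded timings against load size shows whether the system scales roughly linearly or starts to degrade past some threshold.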
7. Smoke Testing: Carried out to ensure that the basic functionality of the application is in line with expectations; the lower-level fine details are not inspected. After passing the smoke test, the build is ready to be moved to other levels of testing or to UAT.
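A smoke suite is deliberately coarse: a handful of checks that the core features respond at all. The `app` interface and `FakeApp` below are hypothetical:

```python
# Smoke-test sketch (illustrative): a few coarse checks that the core
# features respond; fine-grained verification is left to later stages.

def smoke_test(app):
    """app is any object exposing start(), home(), and shutdown()."""
    checks = {
        "starts": app.start() is True,
        "serves home page": bool(app.home()),
        "shuts down": app.shutdown() is True,
    }
    return all(checks.values()), checks

class FakeApp:
    # Stand-in application used only to demonstrate the harness.
    def start(self):
        return True

    def home(self):
        return "<html>ok</html>"

    def shutdown(self):
        return True
```

If any of these coarse checks fails, the build is rejected immediately rather than wasting deeper testing effort on it.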
8. Sanity Testing: A narrow form of regression testing. After a small change or minor bug fix, the affected portion of the application is checked for any kind of malfunction or abrupt behavior. It is a kind of ad hoc testing that is not scripted beforehand.
9. Exploratory Testing: Not all test scenarios can be anticipated and documented. While walking through the application, certain features and behaviors uncover themselves: the more you explore, the better and deeper you can test. A significant share of bugs is found during exploratory testing.
10. Security Testing: Done to uncover security loopholes in the existing system: whether adequate measures have been taken to prevent virus attacks and data corruption, and whether authentication, integrity, and confidentiality are properly enforced.
11. Alpha Testing: Performed by the QA team just before the release of a particular feature, that is, once its development has been frozen and it has passed structural testing.
12. Beta Testing: Done by actual users at their own locations, with the help of user manuals and other documents provided by the software development team. Many performance issues are observed and reported during beta testing.
Challenges faced by Quality Analysts
- The first and underlying limitation of the testing process is that it can never be complete or exhaustive. Beyond a certain point, testers, being human, cannot find and test every scenario that could fail the application.
- Getting the ideal test conditions and environment. The application may work flawlessly in the test environment, yet behave differently at the actual user location.
- Testing the user interface. Screen resolution, graphics, and other features are browser-dependent, and it is not feasible for the quality analyst or team to check the behavior on every browser and get every issue fixed.