Software Quality Assurance Testing is an integral and critical phase of any software development project. Developers and testers must ensure that:
newly developed products or product enhancements meet functional and performance requirements
the products are reliable and able to operate consistently under peak loads
The risks of releasing a product that is not yet ready are greater now than ever before, because end users’ expectations and demands have changed.
According to the Standish Group’s research report on project failure and success, nearly three out of four software projects are delivered late, run over budget, or are cancelled before completion. This is true despite the involvement of experienced managers, developers, and testers, and it remains a problem to this day. The anxiety induced by the question “Are we ready to release?” affects every member of the team. Management dreads having to ask it, for fear of hearing an unqualified “No”, or worse.
Two major industry trends add to the pressure. The first is accelerated release cycles. As Business Week put it, “A year’s worth of change happens in a couple of months, a pace known as ‘Internet Time.’ And that’s the problem. The whole industry is operating on Internet Time…”
Second, while releases are more frequent and cycles shorter, the cost of failure has increased dramatically. Just a few years ago, when client-server products were at the cutting edge, releases were perhaps annual, and the expected number of users was known well in advance because all the users were employees. An organization could mitigate a system failure with a manual backup; for example, orders could be taken by hand while the system was down.
But as huge portions of the business were overhauled and these systems addressed larger user populations, releases became more frequent, and system failures commonly meant that no orders could be taken at all. Today, with e-commerce applications, releases can occur two or three times per month, and the user base is a large but unknown number of customers, not employees. System failures are highly visible and can cause customers to run to the competition.
These trends have several serious implications for project managers. The high cost of failure means that deploying untested software is simply not an option. Additionally, every aspect of quality needs validation, not just one or two. Accelerated release cycles drive the need for automated testing in which tests are easy to create, maintain and reuse. There is just too much to do in too little time to rely on manual methods.
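To make the idea of tests that are "easy to create, maintain and reuse" concrete, here is a minimal sketch of an automated test that can be rerun with every build. The function and its values are purely illustrative, not from any particular project:

```python
# Hypothetical example: an order-total function with two automated
# checks, written so the whole file can be rerun on every build.

def order_total(prices, tax_rate=0.0):
    """Sum item prices and apply a flat tax rate, rounded to cents."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

def test_order_total_no_tax():
    assert order_total([10.0, 5.5]) == 15.5

def test_order_total_with_tax():
    assert order_total([100.0], tax_rate=0.08) == 108.0

if __name__ == "__main__":
    test_order_total_no_tax()
    test_order_total_with_tax()
    print("all tests passed")
```

Because the tests live alongside the code and need no manual setup, rerunning them on each build costs almost nothing, which is what makes automation viable at accelerated release cadences.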
A continuous approach to quality, initiated early in the software lifecycle, can lower the cost of completing and maintaining the software significantly. This greatly reduces the risk associated with deploying poor quality software.
Software functional testing best practices include questions such as:
How could this software break?
In what possible situations could this software fail to work predictably?
Many software testing strategies challenge the assumptions, risks, and uncertainty inherent in the work of other disciplines, and address those concerns through concrete demonstration and impartial evaluation.
Software testing focuses primarily on evaluating or assessing product quality, which is realized through the following core practices:
Find and document defects in software quality
Advise on the perceived software quality
Validate and prove the assumptions made in design and requirement specifications through concrete demonstration
Validate that the software product works as designed
Validate that the requirements are implemented appropriately
The most effective way to reduce risk is to start testing early in the development cycle and to test iteratively, with every build. With this approach, defects are removed as the features are implemented. The testing of the application is completed shortly after the final features are coded, and as a result the product is ready for release much earlier.
Additionally, the knowledge of what features are completed (i.e. both coded and tested) affords management greater control over the entire process and promotes effective execution of the business strategy. Testing with every iteration may require some additional upfront planning between developers and testers, and a more earnest effort to design for testability; but these are both inherently positive undertakings, and the rewards are substantial.
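The management control described above rests on knowing which features are "done", i.e. both coded and tested. A minimal sketch of that bookkeeping, with entirely hypothetical feature names and statuses, might look like:

```python
# Illustrative sketch: a feature is "done" only when it is both
# coded and tested; tracking this per build shows true progress.

features = {
    "login":    {"coded": True,  "tested": True},
    "checkout": {"coded": True,  "tested": False},
    "reports":  {"coded": False, "tested": False},
}

def done(feature):
    """A feature counts as complete only if coded AND tested."""
    return feature["coded"] and feature["tested"]

completed = [name for name, f in features.items() if done(f)]
print(f"{len(completed)}/{len(features)} features ready: {completed}")
# -> 1/3 features ready: ['login']
```

Counting only coded-and-tested features avoids the common trap of reporting a feature as finished before its defects have been found and removed.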
There are several key advantages gained by testing early and with every build to close the quality gap quickly:
Risk is identified and reduced in the primary stages of development instead of in the closing stages.
Repairs to problems are less costly
The release date can be more accurately predicted throughout the project
Test results can be reported against requirements
The product can be shipped sooner
The business strategy can be executed more effectively
Artifacts can be reused for regression testing
Test artifacts are not bound to any particular vendor
The key measures of a test are coverage and quality. Test coverage measures testing completeness, expressed either as the coverage of test requirements and test cases or as the coverage of executed code; it thus includes both requirements-based and code-based coverage. Quality is a measure of the reliability, stability, and performance of the target-of-test (the system or application under test), and is based on evaluating test results and analyzing the change requests (defects) identified during testing.
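As an illustration of requirements-based coverage, the measure can be computed as the fraction of requirements exercised by at least one test case. The requirement and test-case IDs below are hypothetical:

```python
# Illustrative sketch: requirements-based test coverage as the
# fraction of requirements exercised by at least one test case.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Each test case lists the requirements it exercises (hypothetical IDs).
test_cases = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-4"},
}

covered = set().union(*test_cases.values())
coverage = len(covered & requirements) / len(requirements)
print(f"requirements coverage: {coverage:.0%}")  # 3 of 4 -> 75%
```

Code-based coverage is computed analogously, but over executed statements or branches rather than requirements, typically by an instrumentation tool rather than by hand.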
Testing is applied to different types of targets at different stages or levels of work effort. These levels are typically distinguished by the roles best skilled to design and conduct the tests, and by the techniques most appropriate at each level. It is important to maintain a balance of focus across these different work efforts.
Developer testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake. In most cases, test execution initially occurs within the developer-testing group that designed and implemented the test, but it is good practice for developers to create their tests so that they are also available to independent testing groups for execution.
Independent testing denotes the test design and implementation most appropriately performed by someone who is independent from the team of developers. In most cases, test execution initially occurs with the independent testing group that designed and implemented the test, but the independent testers should create their tests to make them available to the developer testing groups for execution.
The other levels include:
Independent Stakeholder Testing – testing based on the needs and concerns of various stakeholders
Unit Testing – verifying the smallest testable elements of the software
Integration Testing – ensuring that the components in the implementation model operate properly when combined to execute a use case
System Testing – targeting the system’s end-to-end functioning elements
Acceptance Testing – verifying that the software is ready and can be used by end users to perform the functions and tasks for which it was built
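The first two levels in the list above can be sketched side by side. In this hypothetical example, the unit test verifies the smallest testable element in isolation, while the integration test verifies two components working together:

```python
# Hypothetical sketch: a unit test checks one element in isolation;
# an integration test checks components combined.

def parse_price(text):
    """Smallest testable element: parse a '$x.yz' string to a float."""
    return round(float(text.strip("$")), 2)

def cart_total(price_texts):
    """Component that combines parse_price calls into a cart total."""
    return round(sum(parse_price(t) for t in price_texts), 2)

def test_unit_parse_price():
    # Unit level: exercises parse_price alone.
    assert parse_price("$4.99") == 4.99

def test_integration_cart_total():
    # Integration level: exercises cart_total and parse_price together.
    assert cart_total(["$4.99", "$10.00"]) == 14.99

if __name__ == "__main__":
    test_unit_parse_price()
    test_integration_cart_total()
    print("unit and integration tests passed")
```

System and acceptance tests follow the same pattern at larger scope: the former drives the deployed application end to end, and the latter does so from the end user's point of view against the stated requirements.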
Not all organizations have the expertise or resources to carry out the software testing process. Software testing is essential, but it is not the core activity of most organizations that require it. Outsourcing enables a company to concentrate on its core activities while software testing experts handle the work efficiently, ensuring quality results. The company saves time and money on a process that would otherwise be tedious and exhausting to perform in house.
Outsource your software testing to Stylus Inc. Our team of expert software testing professionals has worked on more than 250 client projects over the past 7 years.