Thursday, June 12, 2008

ABC’s of Software Testing

Software testing:

Software testing is the process used to assess the quality of computer software. It is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; it furnishes a criticism or comparison of the state and behavior of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Over its existence, computer software has continued to grow in complexity and size. Every software product has a target audience; for example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it must assess whether the product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment. A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually, and that more than a third of this cost could be avoided if better software testing were performed.

Scope:

Software testing may be viewed as an important part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner. Although there are close links with SQA, testing departments often exist independently, and some companies have no SQA function at all.


Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects necessarily result in failures; for example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of such changes include the software being run on a new hardware platform, alterations in source data, or interaction with different software.
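As a minimal, hypothetical illustration (the function and data are invented, not from the article), the defect below never causes a failure with today's input data, but a change in the source data turns the same defect into one:

```python
def parse_port(text):
    """Parse a TCP port number from a configuration value."""
    port = int(text)  # Defect: crashes on values such as "8,080" that contain a thousands separator
    if port < 0 or port > 65535:
        raise ValueError("port out of range")
    return port

# Today's configuration files always contain clean integer strings,
# so the defect is never executed in a way that produces a failure.
print(parse_port("8080"))      # works: 8080

# If the source data changes (say, an exported file that writes "8,080"),
# the latent defect now produces a visible failure:
# print(parse_port("8,080"))   # ValueError: invalid literal for int()
```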

A problem with software testing is that testing all combinations of inputs and preconditions is not feasible for anything other than a trivial product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, para-functional dimensions of quality (for example, usability, scalability, performance, compatibility, and reliability) can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another. There are many approaches to software testing. Reviews, walkthroughs, and inspections are considered static testing, whereas actually running the program with a given set of test cases in a given development stage is referred to as dynamic testing. Software testing is used in association with verification and validation:

Verification: Have we built the software right? (i.e., does it match the specification?)

Validation: Have we built the right software? (i.e., is this what the customer wants?)

Software testing can be done by dedicated software testers. Until the 1980s the term "software tester" was used generally, but later testing also came to be seen as a separate profession. Reflecting the different periods and goals of software testing, different roles have been established: test lead/manager, test designer, tester, test automation developer, and test administrator.

History:


The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing, it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from verification. In 1988, Dave Gelperin and William C. Hetzel classified the phases and goals of software testing into the following stages:

Until 1956 - Debugging oriented

1957-1978 - Demonstration oriented

1979-1982 - Destruction oriented

1983-1987 - Evaluation oriented

1988-2000 - Prevention oriented

Testing methods:

Software testing methods are traditionally divided into black box testing and white box testing. These two approaches describe the point of view a test engineer takes when designing test cases.

Black box testing treats the software as a black box, without any knowledge of its internal behavior. It aims to test the functionality according to the requirements. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for a given input, the output value (or behavior) is the same as the expected value specified in the test case. Black box testing methods include equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrices, etc.
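As a hedged sketch (the grading function and its partitions are hypothetical, chosen only to illustrate the techniques), equivalence partitioning and boundary value analysis for a routine that accepts scores from 0 to 100 might produce test cases like these:

```python
import unittest

def grade(score):
    """Hypothetical function under test: map a 0-100 score to pass/fail."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "pass" if score >= 50 else "fail"

class BlackBoxTests(unittest.TestCase):
    def test_equivalence_partitions(self):
        # One representative value per partition: invalid-low, fail, pass, invalid-high.
        self.assertRaises(ValueError, grade, -20)
        self.assertEqual(grade(25), "fail")
        self.assertEqual(grade(75), "pass")
        self.assertRaises(ValueError, grade, 140)

    def test_boundary_values(self):
        # Values at and just around each boundary of the valid range.
        self.assertRaises(ValueError, grade, -1)
        self.assertEqual(grade(0), "fail")
        self.assertEqual(grade(49), "fail")
        self.assertEqual(grade(50), "pass")
        self.assertEqual(grade(100), "pass")
        self.assertRaises(ValueError, grade, 101)

if __name__ == "__main__":
    unittest.main()
```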

White box testing, however, is when the tester has access to the internal data structures, code, and algorithms. White box testing methods include creating tests to satisfy some code coverage criteria. For example, the test designer can create tests to cause all statements in the program to be executed at least once. Other examples of white box testing are mutation testing and fault injection methods. White box testing includes all static testing.

White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Two common forms of code coverage are function coverage, which reports on the functions executed, and statement coverage, which reports on the number of lines executed to complete the test. Both return a coverage metric, measured as a percentage.
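As a rough sketch of how statement coverage is measured in practice (this assumes the third-party coverage.py package; the exact API may differ between versions, so treat it as illustrative rather than definitive), a test suite can be run under the coverage tracer and the resulting percentage reported:

```python
# Requires the third-party "coverage" package (pip install coverage).
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Discover and run whatever unittest tests exist under the current directory.
suite = unittest.defaultTestLoader.discover(".")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()

# report() prints a per-file table and returns the total statement coverage.
total = cov.report()
print(f"Total statement coverage: {total:.1f}%")
```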

In recent years the term grey box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey-box because the input and output are clearly outside of the black-box we are calling the software under test. This is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. Grey box testing may also include reverse engineering to determine, for instance, boundary values.


Special methods exist to test non-functional aspects of software. Performance testing checks to see if the software can handle large quantities of data or users. Usability testing is needed to check if the user interface is easy to use and understand. Security testing is essential for software which processes confidential data and to prevent system intrusion by hackers. To test internationalization and localization aspects of software a pseudo localization method can be used.
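A minimal, hypothetical sketch of a performance check (the sorting workload and the 0.5-second budget are invented for illustration; real limits come from the requirements): time a representative operation and fail the test if it exceeds the agreed budget.

```python
import random
import time
import unittest

class PerformanceSmokeTest(unittest.TestCase):
    def test_sorting_large_dataset_within_budget(self):
        data = [random.random() for _ in range(1_000_000)]

        start = time.perf_counter()
        sorted(data)  # the operation whose speed we care about
        elapsed = time.perf_counter() - start

        # Hypothetical budget; in practice this comes from a performance requirement.
        self.assertLess(elapsed, 0.5, f"sorting took {elapsed:.2f}s, budget is 0.5s")

if __name__ == "__main__":
    unittest.main()
```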

Testing process:

A common practice is for software testing to be performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it as an ongoing process until the project finishes.

In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
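A minimal test-first sketch (the shopping-cart class and its API are hypothetical): the unit test is written before the production code, fails until the Cart class below it is implemented, and then passes.

```python
import unittest

# Step 1: write the test first. With no Cart implementation yet, this fails,
# which is exactly what test-driven development expects.
class CartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 12.50)
        cart.add("pen", 2.00)
        self.assertAlmostEqual(cart.total(), 14.50)

# Step 2: write just enough production code to make the test pass.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

if __name__ == "__main__":
    unittest.main()
```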

Testing can be done on the following levels:


Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
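A small hypothetical example of a class-level unit test in this spirit: each test exercises one unit (here an invented BankAccount class) in isolation, including the behavior of its constructor.

```python
import unittest

class BankAccount:
    """Hypothetical minimal unit under test."""
    def __init__(self, owner, balance=0):
        if balance < 0:
            raise ValueError("opening balance cannot be negative")
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class BankAccountTest(unittest.TestCase):
    def test_constructor_sets_initial_state(self):
        account = BankAccount("alice", 100)
        self.assertEqual(account.owner, "alice")
        self.assertEqual(account.balance, 100)

    def test_constructor_rejects_negative_balance(self):
        self.assertRaises(ValueError, BankAccount, "bob", -1)

    def test_deposit_increases_balance(self):
        account = BankAccount("alice")
        account.deposit(40)
        self.assertEqual(account.balance, 40)

if __name__ == "__main__":
    unittest.main()
```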

Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
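By contrast, an integration-style test exercises the interface between components. In this hypothetical sketch, the component under test depends on a separate rate-provider component; a test double from the standard library's unittest.mock stands in for a component that has not been integrated yet, so the interaction across the interface can still be verified.

```python
import unittest
from unittest.mock import Mock

class PriceConverter:
    """Component under test: depends on a separate rate-provider component."""
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def convert(self, amount, currency):
        return amount * self.rate_provider.get_rate(currency)

class PriceConverterIntegrationTest(unittest.TestCase):
    def test_convert_uses_rate_from_provider(self):
        provider = Mock()
        provider.get_rate.return_value = 1.25
        converter = PriceConverter(provider)

        self.assertEqual(converter.convert(100, "EUR"), 125.0)
        # Verify the interaction across the interface, not just the end result.
        provider.get_rate.assert_called_once_with("EUR")

if __name__ == "__main__":
    unittest.main()
```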

System testing tests a completely integrated system to verify that it meets its requirements.
System integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.

Before shipping the final version of software, alpha and beta testing are often done additionally:

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.


Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.


Finally, acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Regression testing:

After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven't unintentionally caused a regression of previous functionality.

Regression testing can be performed at any or all of the above test levels. These regression tests are often automated.
More specific forms of regression testing are known as sanity testing (a quick check for obviously broken or bizarre behavior) and smoke testing (a check of basic functionality).
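A hedged sketch of how such an automated regression run is often assembled (the test classes here are placeholders standing in for tests that passed on the previous release): everything that previously passed is bundled into one suite and re-run after each change, with a much smaller smoke suite for quick basic-functionality checks.

```python
import unittest

class LoginTests(unittest.TestCase):
    """Stand-in for tests that passed on the previous release (hypothetical)."""
    def test_valid_credentials_accepted(self):
        self.assertTrue(True)   # placeholder assertion

class CheckoutTests(unittest.TestCase):
    def test_total_is_recalculated_after_discount(self):
        self.assertTrue(True)   # placeholder assertion

def regression_suite():
    """Re-run everything that previously passed, after each modification."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for case in (LoginTests, CheckoutTests):
        suite.addTests(loader.loadTestsFromTestCase(case))
    return suite

def smoke_suite():
    """A small subset that only checks basic functionality (a quick smoke test)."""
    return unittest.TestLoader().loadTestsFromTestCase(LoginTests)

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(regression_suite())
```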

All functional test automation tools are, in essence, regression testing tools.


Finding faults early:

It is commonly believed that the earlier a defect is found, the cheaper it is to fix. The following table shows the relative cost of fixing a defect depending on the stage at which it was introduced and the stage at which it was found. For example, if a problem in the requirements is found only post-release, it costs 10 to 100 times more to fix than if it had been caught during the requirements review.

Time Introduced     Time Detected
                    Requirements   Architecture   Construction   System Test   Post-Release
Requirements        1              3              5-10           10            10-100
Architecture        -              1              10             15            25-100
Construction        -              -              1              10            10-25



Measuring software testing:

Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements as described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of common software measures, often called "metrics", which are used to measure the state of the software or the adequacy of the testing.


Testing artifacts:

The software testing process can produce several artifacts. A test case is a software testing document which consists of an event, action, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (though steps are often kept in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These records can be stored in a word processor document, spreadsheet, database, or other common repository.
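As a sketch of the structure just described (the field names are hypothetical and would follow whatever template the organization uses), a test case record might be represented like this before being stored in a spreadsheet or database:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    """One row of a test case repository; field names are illustrative only."""
    case_id: str
    related_requirements: List[str]
    preconditions: List[str]
    steps: List[str]              # often kept in a separate, reusable test procedure
    input_data: str
    expected_result: str
    actual_result: Optional[str] = None   # filled in during execution
    automatable: bool = False
    automated: bool = False

example = TestCase(
    case_id="TC-042",
    related_requirements=["REQ-7"],
    preconditions=["user account exists"],
    steps=["open login page", "enter valid credentials", "press Login"],
    input_data="user=alice, password=correct-horse",
    expected_result="user is taken to the dashboard",
)
print(example)
```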

In a database system, you may also be able to see past test results and who generated the results and the system configuration used to generate those results. These past results would usually be stored in a separate table.
The test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both. The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases.

The test suite also usually contains a section in which the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

A test specification is called a test plan. The developers are made aware of which test plans will be executed, and this information is made available to them in advance. This makes the developers more cautious when developing their code and ensures that their code is not subjected to any surprise test case or test plan.
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

Sample testing cycle:

Although the details vary between organizations, there is a typical cycle to testing:

Requirements analysis:

Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.


Test planning:
Test strategy, test plan, test-bed creation. Since many activities will be carried out during testing, a plan is needed.


Test development:
Test procedures, test scenarios, test cases, test scripts to use in testing software.


Test execution:
Testers execute the software based on the plans and tests and report any errors found to the development team.


Test reporting:
Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.


Retesting the defects:
Not all errors or defects reported must be fixed by the software development team. Some may be caused by errors in configuring the test software to match the development or production environment. Some defects can be handled by a workaround in the production environment. Others might be deferred to future releases of the software, or the deficiency might be accepted by the business user. Still other defects may be rejected by the development team (with due reason, of course) if they consider the reports invalid.


Software testing controversies:

What constitutes responsible software testing? - Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.

Agile vs. traditional - Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has gained popularity mainly in commercial circles, whereas the CMM was embraced by government and military software providers.

Exploratory vs. scripted - Should tests be designed at the same time as they are executed, or should they be designed beforehand?


Manual vs. automated - Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests.


Software design vs. software implementation - Should testing be carried out only at the end, or throughout the whole process?


Who watches the watchmen? - The idea is that any form of observation is also an interaction, so the act of testing can itself affect that which is being tested.

Certification:


Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software. No certification is based on a widely accepted body of knowledge.

This has led some to declare that the testing field is not ready for certification. Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.

Certifications can be grouped into exam-based and education-based. Exam-based certifications require passing an exam, for which candidates can also prepare by self-study (e.g., ISTQB or QAI). Education-based certifications are built around instructor-led sessions in which each course has to be passed (e.g., IIST, the International Institute for Software Testing).


Testing certifications:

1. Certified Software Tester (CSTE) offered by the Quality Assurance Institute (QAI)

2. Certified Software Test Professional (CSTP) offered by the International Institute for Software Testing

3. CSTP (TM) (Australian Version) offered by K. J. Ross & Associates

4. CATe offered by the International Institute for Software Testing

5. ISEB offered by the Information Systems Examinations Board

6. Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualification Board

7. Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualification Board

8. CBTS offered by the Brazilian Certification of Software Testing (ALATS)

Quality Assurance certifications:

9. CSQE offered by the American Society for Quality (ASQ)

10. CSQA offered by the Quality Assurance Institute (QAI)

Wednesday, June 11, 2008

Some...Software Testing Definitions

Technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.

Test: A set of one or more test cases

Test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

Test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

Test basis: All documents from which the requirements of a component or system can be inferred; the documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis.

Test bed: See test environment.

Test case: A set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

Test case suite: See test suite.

Test charter: A statement of test objectives, and possibly test ideas on how to test. Test charters are for example often used in exploratory testing.

Test closure: During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report. See also test process.

Test comparator: A test tool to perform automated test comparison.

Test comparison: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution (post-execution comparison).

Test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

Test control: A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management.

Test cycle: Execution of the test process against a single identifiable release of the test object.

Test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

Test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

Test design: See test design specification.

Test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.

Test design technique: A procedure used to derive and/or select test cases.

Test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. requirements management tool, from specified test conditions held in the tool itself, or from code.

Test driven development: A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.

Test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test evaluation report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Test execution: The process of running a test on the component or system under test, producing actual result(s).

Test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.

Test execution phase: The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied.

Test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

Test execution technique: The method used to perform the actual test execution, either manually or automated.

Test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.

Test generator: See test data preparation tool.

Test harness: A test environment comprised of stubs and drivers needed to execute a test.

Test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

Test input: The data received from an external source by the test object during test execution. The external source can be hardware, software or human.

Test item: The individual element to be tested. There usually is one test object and many test items. See also test object.

Test leader: See test manager.

Test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.

Test log: A chronological record of relevant details about the execution of tests.

Test logging: The process of recording information about tests executed into a test log.

Test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test management tool: A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.

Test manager: The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.

Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that describes the key elements of an effective test process.

Test monitoring: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management.

Test object: The component or system to be tested. See also test item.

Test objective: A reason or purpose for designing and executing a test.

Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be a requirements specification, the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code.

Test performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP).

Test phase: A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level.

Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

Test planning: The activity of establishing or updating a test plan.

Test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

Test Point Analysis (TPA): A formula based test estimation method based on function point analysis.

Test procedure: See test procedure specification.

Test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.

Test process: The fundamental test process comprises planning, specification, execution, recording, checking for completion and test closure activities.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

Test record: See test log.

Test recording: See test logging.

Test reproducibility: An attribute of a test indicating whether the same results are produced each time the test is executed.

Test report: See test summary report.

Test requirement: See test condition.

Test run: Execution of a test on a specific version of the test object.

Test run log: See test log.

Test scenario: See test procedure specification.

Test script: Commonly used to refer to a test procedure specification, especially an automated one.

Test set: See test suite.

Test situation: See test condition.

Test specification: A document that consists of a test design specification, test case specification and/or test procedure specification.

Test specification technique: See test design technique.

Test stage: See test level.

Test strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

Test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.

Test target: A set of exit criteria.

Test technique: See test design technique.

Test tool: A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis.

Test type: A group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, i.e. reliability test, usability test, regression test etc., and may take place on one or more test levels or test phases.

Testability: The capability of the software product to enable modified software to be tested.

Testability review: A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process.

Testable requirements: The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met.

Tester: A skilled professional who is involved in the testing of a component or system.

Testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

Testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

Thread testing: A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests.