UCB
GESTIÓN DE CALIDAD DE SISTEMAS
Chapter 2 - Testing Throughout the Software Development Lifecycle
START
by Katty Ode
INDEX
1. Software Development Lifecycle Models
2. Test Levels
3. Test Types
4. Maintenance Testing
Chapter 02
Testing throughout the software life cycle
Objective: Understand the relationship between development activities, test activities and work products in the development life cycle, and learn the levels of testing.
Software Development Models
- Testing is not a stand-alone activity.
- It has its place within a software development life cycle model and therefore the lifecycle applied will largely determine how testing is organized.
- There are numerous development life cycles, each designed to achieve different objectives.
In every development life cycle, part of testing is focused on verification and part is focused on validation.
Verification is concerned with evaluating a work product, component or system to determine whether it meets the requirements: is the deliverable built according to the specification?
Validation is concerned with evaluating a work product, component or system to determine whether it meets user needs and requirements: is the deliverable fit for purpose? Does it provide a solution to the problem?
VERIFICATION
VALIDATION
SOFTWARE DEVELOPMENT MODELS
V-MODEL
- 4 test levels
- Component testing
- Integration testing
- System testing
- Acceptance testing
ITERATIVE LIFE CYCLES
- Rapid Application Development
- Dynamic Systems Development Method (DSDM)
- Agile development
2.1.1. V-MODEL
Before discussing the V-model, consider the waterfall model, one of the earliest life cycle models to be designed. It has a natural timeline in which tasks are executed sequentially, so testing tends to happen at the end of the project. The V-model was developed to address some of the problems experienced with the traditional waterfall approach: defects were being found too late in the life cycle, because testing was not involved until the end of the project.
The V-model provides guidance that testing needs to begin as early as possible in the life cycle. There are a variety of testing activities that need to be performed before the end of the coding phase, and these activities should be carried out in parallel with development activities. The V-model illustrates how testing activities (verification and validation) can be integrated into each phase of the life cycle.
2.1.1. V-MODEL
2.1.1 V-MODEL
Although variants of the V-model exist, a common type of V-model uses four test levels.
2.1.1. V-MODEL
Acceptance testing: validation testing with respect to user needs, requirements, and business processes, conducted to determine whether or not to accept the system.
System testing: concerned with the behavior of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements.
2.1.1. V-MODEL
Integration testing: tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems.
Component testing: searches for defects in, and verifies the functioning of, software components (e.g. modules, programs, objects, classes, etc.) that are separately testable.
2.1.1. V-MODEL
[Figure: V-model diagram relating user requirements to the four test levels (component, integration, system, and acceptance testing)]
Not all life cycles are sequential. There are also iterative or incremental life cycles where, instead of one large development timeline from beginning to end, we cycle through a number of smaller self-contained life cycle phases for the same project. A common feature of iterative approaches is that the delivery is divided into increments or builds, with each increment adding new functionality.
2.1.2. ITERATIVE LIFE CYCLES
The initial increment will contain the infrastructure required to support the initial build functionality.
Subsequent increments will need testing for the new functionality, regression testing of the existing functionality, and integration testing of both new and existing parts.
2.1.2. ITERATIVE LIFE CYCLES
- Regression testing is increasingly important on all iterations after the first one. This means that more testing will be required at each subsequent delivery phase, which must be allowed for in the project plans.
- This life cycle can give early market presence with critical functionality, and can be simpler to manage because the workload is divided into smaller pieces.
2.1.2. ITERATIVE LIFE CYCLES
2.1.2. ITERATIVE LIFE CYCLES
Rapid Application Development
- Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration.
- Components/functions are developed in parallel as if they were mini-projects; the developments are time-boxed, delivered, and then assembled into a working prototype.
2.1.2. ITERATIVE LIFE CYCLES
Rapid Application Development
- This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements.
- Rapid change and development of the product is possible using this methodology.
- This methodology allows early validation of technology risks and a rapid response to changing customer requirements.
2.1.2. ITERATIVE LIFE CYCLES
Rapid Application Development
- Dynamic Systems Development Method (DSDM) is a refined RAD process that puts controls in place to stop the process from getting out of control.
- From the testing perspective, we need to plan very carefully and update our plans regularly, as things will be changing very rapidly.
2.1.2. ITERATIVE LIFE CYCLES
AGILE DEVELOPMENT
EXTREME PROGRAMMING (XP)
Extreme Programming (XP) is currently one of the most well-known agile development life cycle models. The methodology claims to be more human-friendly than traditional development methods.
Some characteristics of XP are:
- It promotes the generation of business stories to define the functionality.
- It demands an on-site customer for continual feedback and to define and carry out functional acceptance testing.
- It promotes pair programming and shared code ownership amongst the developers.
- It states that component test scripts shall be written before the code is written and that those tests should be automated (a sketch of this test-first practice follows below).
- It states that integration and testing of the code shall happen several times a day.
- It states that we always implement the simplest solution to meet today's problems.
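To make the test-first practice concrete, here is a minimal sketch in Python with pytest. It is purely illustrative, not part of the syllabus: the `compound_interest` function, its signature and its expected values are all invented. The automated component test is written first, and only then the simplest code that makes it pass.

```python
import pytest

# Written FIRST: an automated component test that pins down the
# expected behavior before any production code exists.
def test_compound_interest_one_year():
    # 1000 at 5% compounded annually for 1 year -> 1050.00
    assert compound_interest(principal=1000.0, rate=0.05, years=1) == pytest.approx(1050.0)

# Written SECOND: the simplest solution that meets today's problem.
def compound_interest(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years
```

Run with `pytest`; in XP such tests would also be executed automatically at every integration, several times a day.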
2.1.2. ITERATIVE LIFE CYCLES
AGILE DEVELOPMENT
- Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features.
- Kanban: Implemented with or without fixed-length iterations; it can deliver either a single enhancement or feature upon completion, or can group features together to release at once.
2.1.3 Testing within a life cycle model
In summary, whichever life cycle model is being used, there are several characteristics of good testing:
- for every development activity there is a corresponding testing activity;
- each test level has test objectives specific to that level;
- the analysis and design of tests for a given test level should begin during the corresponding development activity;
- testers should be involved in reviewing documents as soon as drafts are available in the development cycle.
Question 1 of 4
Which one of the following is the BEST definition of an incremental development model?
a) Defining requirements, designing software and testing are done in phases where in each phase a piece of the system is added
b) A phase in the development process should begin when the previous phase is complete
c) Testing is viewed as a separate phase which takes place after development has been completed
d) Testing is added to development as an increment
Select ONE option.
Question 3 of 4
What are good practices for testing within the development life cycle?
a) Early test analysis and design.
b) Different test levels are defined with specific objectives.
c) Testers will start to get involved as soon as coding is done.
d) A and B above.
Question 4 of 4
Which option best describes objectives for test levels with a life cycle model?
a) Objectives should be generic for any test level.
b) Objectives are the same for each test level.
c) The objectives of a test level don’t need to be defined in advance.
d) Each level has objectives specific to that level.
End of day 4
2.2 TEST LEVELS
Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the test process, performed in relation to software at a given level of development, from individual units or components to complete systems. For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.
TEST LEVELS
[Figure: V-model diagram relating user requirements to the four test levels (component, integration, system, and acceptance testing)]
2.2 TESTING LEVELS
Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software components (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing is often done in isolation from the rest of the system, depending on the context of the development life cycle and the system.
COMPONENT TESTING
2.2 TESTING LEVELS
Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and in practice usually involves the programmer who wrote the code. Sometimes, depending on the applicable level of risk, component testing is carried out by a different programmer, thereby introducing independence. Defects are typically fixed as soon as they are found, without formally recording the incidents found.
COMPONENT TESTING
Component testing
OBJECTIVES
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the component are as designed and specified
- Building confidence in the component’s quality
- Finding defects in the component
- Preventing defects from escaping to higher test levels
In some cases, especially in incremental and iterative development models (e.g., Agile) where code changes are ongoing, automated component regression tests play a key role in building confidence that changes have not broken existing components.
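As a hedged sketch of the above: the small component below is separately testable in isolation, and its automated tests can be re-run on every change as component regression tests (for instance from a continuous integration job). The validation rule and all names are invented for illustration.

```python
# test_account_number.py -- run with: pytest test_account_number.py

# The component under test: separately testable, with no dependencies
# on the rest of the system.
def validate_account_number(value: str) -> bool:
    """Hypothetical rule: an account number is exactly 10 digits."""
    return len(value) == 10 and value.isdigit()

def test_accepts_ten_digits():
    assert validate_account_number("1234567890")

def test_rejects_too_short():
    assert not validate_account_number("12345")

def test_rejects_non_digits():
    assert not validate_account_number("12345abcde")
```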
Component testing
TEST BASIS
Examples of work products that can be used as a test basis for component testing include:
- Detailed design
- Code
- Data model
- Component specifications
Component testing
TEST OBJECTS
Typical test objects for component testing include:
- Components, units or modules
- Code and data structures
- Classes
- Database modules
Component testing
TYPICAL DEFECTS AND FAILURES
Examples of typical defects and failures for component testing include:
- Incorrect functionality (e.g., not as described in design specifications)
- Data flow problems
- Incorrect code and logic
2.2 TESTING LEVELS
INTEGRATION TESTING
Tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems. There are two different levels of integration testing, which may be carried out on test objects of varying size:
- Component integration testing
- System integration testing
Component integration testing focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process.
System integration testing focuses on the interactions and interfaces between systems, packages, and microservices. System integration testing can also cover interactions with, and interfaces provided by, external organizations (e.g., web services). System integration testing may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development).
2.2 TESTING LEVELS
INTEGRATION TESTING
2.2 TESTING LEVELS
Component integration tests and system integration tests should concentrate on the integration itself. Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers. Ideally, testers performing system integration testing should understand the system architecture, and should have influenced integration planning.
INTEGRATION TESTING
2.2 TESTING LEVELS
The greater the scope of integration, the more difficult it becomes to isolate failures to a specific interface, which may lead to an increased risk. This leads to varying approaches to integration testing. One extreme is that all components or systems are integrated simultaneously, after which everything is tested as a whole. This is called ‘big-bang’ integration testing. Big-bang testing has the advantage that everything is finished before integration testing starts. There is no need to simulate (as yet unfinished) parts. The major disadvantage is that in general it is time-consuming and difficult to trace the cause of failures with this late integration.
INTEGRATION TESTING
Another extreme is that all programs are integrated one by one, and a test is carried out after each step (incremental testing). Between these two extremes, there is a range of variants. The incremental approach has the advantage that defects are found early, in a smaller assembly, when it is relatively easy to detect the cause. A disadvantage is that it can be time-consuming. Common incremental strategies include:
- Top-down: testing takes place from top to bottom, following the control flow or architectural structure
- Bottom-up: testing takes place from the bottom of the control flow upwards.
- Functional incremental: integration and testing takes place on the basis of the functions or functionality, as documented in the functional specification.
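As an illustration of incremental (here, top-down) integration, the sketch below tests a higher-level component against a stub that stands in for a lower-level component that is not yet integrated. Everything here (names and behavior) is a hypothetical example, not prescribed by the syllabus.

```python
# Higher-level component: formats a quote using a lower-level rate lookup.
def quote_interest(account_id: str, rate_source) -> str:
    rate = rate_source.current_rate(account_id)
    return f"Your rate is {rate:.2%}"

# Stub replacing the real rate service during top-down integration:
# it returns a canned answer so the interface can be exercised early.
class RateServiceStub:
    def current_rate(self, account_id: str) -> float:
        return 0.05

def test_quote_uses_rate_from_lower_component():
    assert quote_interest("ACC-1", RateServiceStub()) == "Your rate is 5.00%"
```

In bottom-up integration the roles are reversed: the lower-level components are real, and drivers call them from above.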
2.2 TESTING LEVELS
INTEGRATION TESTING
INTEGRATION testing
OBJECTIVES
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified
- Building confidence in the quality of the interfaces
- Finding defects (which may be in the interfaces themselves or within the components or systems)
- Preventing defects from escaping to higher test levels
As with component testing, in some cases integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems.
INTEGRATION testing
TEST BASIS
Examples of work products that can be used as a test basis for integration testing include:
- Software and system design
- Sequence diagrams
- Interface and communication protocol specifications
- Use cases
- Architecture at component or system level
- Workflows
- External interface definitions
INTEGRATION testing
TEST OBJECTS
Typical test objects for integration testing include:
- Subsystems
- Databases
- Infrastructure
- Interfaces
- APIs
- Microservices
INTEGRATION testing
TYPICAL DEFECTS AND FAILURES
Component integration testing examples:
- Incorrect data, missing data, or incorrect data encoding
- Incorrect sequencing or timing of interface calls
- Interface mismatch
- Failures in communication between components
- Unhandled or improperly handled communication failures between components
- Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components
INTEGRATION testing
TYPICAL DEFECTS AND FAILURES
System integration testing examples:
- Inconsistent message structures between systems
- Incorrect data, missing data, or incorrect data encoding
- Interface mismatch
- Failures in communication between systems
- Unhandled or improperly handled communication failures between systems
- Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems
- Failure to comply with mandatory security regulations
2.2 TESTING LEVELS
System testing focuses on the behavior and capabilities of a whole system or product, often considering end-to-end tasks. System testing is most often the final test on behalf of development, verifying that the system to be delivered meets the specification, and its purpose may be to find as many defects as possible.
SYSTEM TESTING
system testing
OBJECTIVES
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the system are as designed and specified
- Validating that the system is complete and will work as expected
- Building confidence in the quality of the system as a whole
- Finding defects
- Preventing defects from escaping to higher test levels or production
For certain systems, verifying data quality may also be an objective. System testing often produces information that is used by stakeholders to make release decisions. System testing may also satisfy legal or regulatory requirements or standards.
SYSTEM testing
TEST BASIS
Examples of work products that can be used as a test basis for system testing include:
- System and software requirement specifications (functional and non-functional)
- Risk analysis reports
- Use cases
- Epics and user stories
- Models of system behavior
- State diagrams
- System and user manuals
SYSTEM testing
TEST OBJECTS
Typical test objects for system testing include:
- Applications
- Hardware/software systems
- Operating systems
- System under test (SUT)
- System configuration and configuration data
SYSTEM testing
TYPICAL DEFECTS AND FAILURES
Examples of typical defects and failures for system testing include:
- Incorrect calculations
- Incorrect or unexpected system functional or non-functional behavior
- Incorrect control and/or data flows within the system
- Failure to properly and completely carry out end-to-end functional tasks
- Failure of the system to work properly in the system environment(s)
- Failure of the system to work as described in system and user manuals
2.2 TESTING LEVELS
When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing. The acceptance test should answer questions such as: ‘Can the system be released?’ Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk.
ACCEPTANCE TESTING
2.2 TESTING LEVELS
Common forms of acceptance testing include the following:
- User acceptance testing
- Operational acceptance testing
- Contractual and regulatory acceptance testing
- Alpha and beta testing.
ACCEPTANCE TESTING
ACCEPTANCE TESTING
User acceptance testing of the system is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost, and risk.
User acceptance testing (UAT)
ACCEPTANCE TESTING
Operational acceptance testing (OAT)
The acceptance testing of the system by operations or systems administration staff is usually performed in a (simulated) production environment. The tests focus on operational aspects, and may include:
- Testing of backup and restore
- Installing, uninstalling and upgrading
- Disaster recovery
- User management
- Maintenance tasks
- Data load and migration tasks
- Checks for security vulnerabilities
- Performance testing
The main objective of operational acceptance testing is building confidence that the operators or system administrators can keep the system working properly for the users in the operational environment, even under exceptional or difficult conditions.
ACCEPTANCE TESTING
- Contractual acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract.
- Contractual acceptance testing is often performed by users or by independent testers.
- Regulatory acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations.
- Regulatory acceptance testing is often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies.
Contractual and regulatory acceptance testing
ACCEPTANCE TESTING
- If the system has been developed for the mass market, e.g. commercial off-the-shelf software (COTS), then testing it for individual users or customers is not practical or even possible in some cases.
- Feedback is needed from potential or existing users in their market before the software product is put out for sale commercially. Very often this type of system undergoes two stages of acceptance test.
- The first is called alpha testing. This test takes place at the developer’s site. A cross-section of potential users and members of the developer’s organization are invited to use the system. Developers observe the users and note problems. Alpha testing may also be carried out by an independent test team.
Alpha and beta testing
ACCEPTANCE TESTING
- Beta testing, or field testing, sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization, where the defects are repaired.
One objective of alpha and beta testing is building confidence among potential or existing customers and/or operators that they can use the system under normal, everyday conditions, and in the operational environment(s), to achieve their objectives with minimum difficulty, cost, and risk. Another objective may be the detection of defects related to the conditions and environment(s) in which the system will be used, especially when those conditions and environment(s) are difficult for the development team to replicate.
Alpha and beta testing
ACCEPTANCE testing
OBJECTIVES
- Establishing confidence in the quality of the system as a whole
- Validating that the system is complete and will work as expected
- Verifying that functional and non-functional behaviors of the system are as specified
Acceptance testing may produce information to assess the system’s readiness for deployment and use by the customer (end-user).
Acceptance testing
TEST BASIS
Examples of work products that can be used as a test basis for acceptance testing include:
- Business processes
- User or business requirements
- Regulations, legal contracts and standards
- Use cases and/or user stories
- System requirements
- System or user documentation
- Installation procedures
- Risk analysis reports
Acceptance testing
TEST BASIS
In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the following work products can be used:
- Backup and restore procedures
- Disaster recovery procedures
- Non-functional requirements
- Operations documentation
- Deployment and installation instructions
- Performance targets
- Database packages
- Security standards or regulations
ACCEPTANCE testing
TEST OBJECTS
Typical test objects for acceptance testing include:
- System under test
- System configuration and configuration data
- Business processes for a fully integrated system
- Recovery systems and hot sites (for business continuity and disaster recovery testing)
- Operational and maintenance processes
- Forms
- Reports
- Existing and converted production data
ACCEPTANCE testing
TYPICAL DEFECTS AND FAILURES
Examples of typical defects and failures for acceptance testing include:
- System workflows do not meet business or user requirements
- Business rules are not implemented correctly
- System does not satisfy contractual or regulatory requirements
- Non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform
End of day 5
TEST LEVELS
[Figure: V-model diagram relating user requirements to the four test levels (component, integration, system, and acceptance testing)]
Question 1 of 3
What type of testing is normally conducted to verify that a product meets a particular regulatory requirement?
a) Unit testing
b) Integration testing
c) System testing
d) Acceptance testing
Question 2 of 3
Consider the following types of defects that a test level might focus on:
1. Defects in separately testable modules or objects
2. Not focused on identifying defects
3. Defects in interfaces and interactions
4. Defects in the whole test object
Which of the following lists correctly matches the test levels with the defect focus options given above?
a) 1 = performance test; 2 = component test; 3 = system test; 4 = acceptance test
b) 1 = component test; 2 = acceptance test; 3 = system test; 4 = integration test
c) 1 = component test; 2 = acceptance test; 3 = integration test; 4 = system test
d) 1 = integration test; 2 = system test; 3 = component test; 4 = acceptance test
Select ONE option.
Question 3 of 3
Given that the testing being performed has the following attributes:
• Based on interface specifications
• Focused on finding failures in communication
• Uses incremental testing
Which of the following test levels is MOST likely being performed?
a) Integration testing
b) Acceptance testing
c) System testing
d) Component testing
2.3 TEST TYPES
A test type is focused on a particular test objective. Test objectives may include:
- Evaluating functional quality characteristics, such as completeness, correctness, and appropriateness
- Evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability
- Evaluating whether the structure or architecture of the component or system is correct, complete, and as specified
- Evaluating the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for unintended changes in behavior resulting from software or environment changes (regression testing)
2.3 Test TYPES
- Functional testing of a system involves tests that evaluate functions that the system should perform.
- The functions are “what” the system should do.
- Function (or functionality) testing can be done focusing on suitability, interoperability, security, accuracy and compliance.
FUNCTIONAL TESTING
2.3 Test TYPES
- Functional requirements may be described in work products such as business requirements specifications, epics, user stories, use cases, or functional specifications.
- Functional tests should be performed at all test levels
FUNCTIONAL TESTING
2.3 Test TYPES
- Requirements-based testing uses a specification of the functional requirements for the system as the basis for designing tests.
- We should also prioritize the requirements based on risk criteria (if this is not already done in the specification) and use this to prioritize the tests. This will ensure that the most important and most critical tests are included in the testing effort (a sketch of risk-tagged tests follows below).
FUNCTIONAL TESTING - examples
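One hedged way to realize requirements-based, risk-prioritized testing in code is to tag each test with its requirement ID and risk level, so the high-risk subset can be selected and executed first. The sketch below uses pytest markers; the requirement IDs, risk names, and the `transfer` function are all invented (custom markers would normally be registered in pytest.ini to avoid warnings).

```python
import pytest

# Each test is traceable to a functional requirement and carries that
# requirement's risk priority, so the riskiest tests can run first:
#   pytest -m high_risk
def transfer(balance: float, amount: float) -> float:
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer amount")
    return balance - amount

@pytest.mark.high_risk  # REQ-017 (invented): transfers must never overdraw
def test_req_017_rejects_overdraw():
    with pytest.raises(ValueError):
        transfer(balance=100.0, amount=150.0)

@pytest.mark.low_risk   # REQ-042 (invented): balance decreases by the amount
def test_req_042_balance_decreases():
    assert transfer(balance=100.0, amount=40.0) == 60.0
```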
2.3 Test TYPES
- Business-process-based testing uses knowledge of the business processes. Business processes describe the scenarios involved in the day-to-day business use of the system.
- Use-case-based testing also takes the business processes as a starting point, although it starts from tasks to be performed by users. Use cases are a very useful basis for test cases from a business perspective.
FUNCTIONAL TESTING
2.3 Test TYPES
- Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves.
FUNCTIONAL TESTING
2.3 Test TYPES
- A second target for testing is the testing of the quality characteristics, or non-functional attributes, of the system (or component or integration group). Here we are interested in how well or how fast something is done.
- Non-functional testing, like functional testing, should be performed at all test levels, and done as early as possible. The late discovery of non-functional defects can be extremely dangerous to the success of a project.
NON-FUNCTIONAL TESTING
2.3 Test TYPES
- Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.
NON-FUNCTIONAL TESTING
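As a small, hedged illustration of the shape of a non-functional test, the sketch below checks an invented performance requirement (a 50 ms response-time budget) at component level. Real performance and load testing normally relies on dedicated tools and environments; this only shows the idea of asserting on "how fast" rather than on "what".

```python
import time

# Invented operation under test.
ACCOUNTS = [f"account-{i}" for i in range(10_000)]

def search_accounts(query: str) -> list:
    return [acc for acc in ACCOUNTS if query in acc]

def test_search_meets_response_time_budget():
    start = time.perf_counter()
    search_accounts("account-42")
    elapsed = time.perf_counter() - start
    # Invented non-functional requirement: respond within 50 ms.
    assert elapsed < 0.050
```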
2.3 Test TYPES
- Testing of software structure/architecture is known as structural testing. Structural testing is often referred to as 'white-box' or 'glass-box' testing because we are interested in what is happening 'inside the box'.
- White-box testing derives tests based on the system's internal structure or implementation. Internal structure may include code, architecture, work flows, and/or data flows within the system.
WHITE-BOX TESTING
2.3 Test TYPES
- At the component testing level, code coverage is based on the percentage of component code that has been tested, and may be measured in terms of different aspects of code, such as the percentage of executable statements tested in the component, or the percentage of decision outcomes tested (a sketch follows below).
- At the component integration testing level, white-box testing may be based on the architecture of the system, such as interfaces between components.
- White-box test design and execution may involve special skills or knowledge, such as the way the code is built, how data is stored, and how to use coverage tools and correctly interpret their results.
WHITE-BOX TESTING
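A minimal sketch (with an invented function) of what "percentage of decision outcomes tested" means: the function below contains one decision with two outcomes, so the first test alone achieves 50% decision coverage, and both tests together achieve 100%. A coverage tool such as coverage.py can report such figures, e.g., `coverage run --branch -m pytest` followed by `coverage report`.

```python
def fee_for_withdrawal(amount: float, is_premium: bool) -> float:
    # One decision, two outcomes: the True branch and the False branch.
    if is_premium:
        return 0.0
    return amount * 0.01

def test_premium_pays_no_fee():        # exercises the True outcome only
    assert fee_for_withdrawal(100.0, is_premium=True) == 0.0

def test_standard_pays_one_percent():  # exercises the False outcome
    assert fee_for_withdrawal(100.0, is_premium=False) == 1.0
```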
2.3 Test TYPES
When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.
- Confirmation testing
- Regression testing
CHANGE-RELATED TESTING
2.3 Test TYPES
Confirmation testing (re-testing)
When a test fails and we determine that the cause of the failure is a software defect, the defect is reported, and we can expect a new version of the software that has had the defect fixed. In this case we will need to execute the test again to confirm that the defect has indeed been fixed. This is known as confirmation testing (also known as re-testing).
CHANGE-RELATED TESTING
2.3 Test TYPES
When doing confirmation testing, it is important to ensure that the test is executed in exactly the same way as it was the first time, using the same inputs, data and environment. If the test now passes does this mean that the software is now correct? Well, we now know that at least one part of the software is correct – where the defect was. But this is not enough. The fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these ‘unexpected side-effects’ of fixes is to do regression testing.
CHANGE-RELATED TESTING
2.3 Test TYPES
Regression testing
It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behavior of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Such unintended side-effects are called regressions. Regression testing involves running tests to detect such unintended side-effects.
CHANGE-RELATED TESTING
2.3 Test TYPES
- Confirmation testing and regression testing are performed at all test levels.
- It is common for organizations to have what is usually called a regression test suite or regression test pack. This is a set of test cases that is specifically used for regression testing. They are designed to collectively exercise most functions (certainly the most important ones) in a system, but not test any one in detail. It is appropriate to have a regression test suite at every level of testing (component testing, integration testing, system testing, etc.). All of the test cases in a regression test suite would be executed every time a new version of software is produced, and this makes them ideal candidates for automation (a sketch follows below).
CHANGE-RELATED TESTING
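As a hedged sketch of a regression pack at component level: the test below sweeps broadly over the most important functions of a (stand-in) component rather than testing any one deeply, and would be re-executed in full on every new software version, which is exactly why such suites are usually automated (for instance, triggered by the build pipeline). All names are invented.

```python
from dataclasses import dataclass

# Minimal stand-in for the system under test, so the sketch is self-contained.
@dataclass
class Account:
    balance: float = 0.0
    closed: bool = False

def deposit(acc: Account, amount: float) -> None:
    acc.balance += amount

def withdraw(acc: Account, amount: float) -> None:
    acc.balance -= amount

def close(acc: Account) -> None:
    acc.closed = True

def test_account_lifecycle_smoke():
    # Regression-style test: a broad sweep over the core functions with
    # shallow assertions, run on every new version of the software.
    acc = Account()
    deposit(acc, 100.0)
    withdraw(acc, 30.0)
    close(acc)
    assert acc.balance == 70.0 and acc.closed
```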
2.3.5 TEST TYPES AND TEST LEVELS
It is possible to perform any of the test types mentioned above at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests are given below across all test levels, for a banking application.
FUNCTIONAL TESTS:
- For component testing, tests are designed based on how a component should calculate compound interest.
- For component integration testing, tests are designed based on how account information captured at the user interface is passed to the business logic.
- For system testing, tests are designed based on how account holders can apply for a line of credit on their checking accounts.
2.3.5 TEST TYPES AND TEST LEVELS
FUNCTIONAL TESTS:
- For system integration testing, tests are designed based on how the system uses an external microservice to check an account holder's credit score.
- For acceptance testing, tests are designed based on how the banker handles approving or declining a credit application.
2.3.5 TEST TYPES AND TEST LEVELS
NON-FUNCTIONAL TESTS:
- For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation.
- For component integration testing, security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic.
- For system testing, portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices.
2.3.5 TEST TYPES AND TEST LEVELS
NON-FUNCTIONAL TESTS:
- For system integration testing, reliability tests are designed to evaluate system robustness if the credit score microservice fails to respond.
- For acceptance testing, usability tests are designed to evaluate the accessibility of the banker's credit processing interface for people with disabilities.
2.3.5 TEST TYPES AND TEST LEVELS
WHITE-BOX TESTS:
- For component testing, tests are designed to achieve complete statement and decision coverage for all components that perform financial calculations.
- For component integration testing, tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic.
- For system testing, tests are designed to cover sequences of web pages that can occur during a credit line application.
2.3.5 TEST TYPES AND TEST LEVELS
WHITE-BOX TESTS:
- For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score microservice.
- For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.
2.3.5 TEST TYPES AND TEST LEVELS
CHANGE-RELATED TESTS:
- For component testing, automated regression tests are built for each component and included within the continuous integration framework.
- For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.
- For system testing, all tests for a given workflow are re-executed if any screen on that workflow changes.
2.3.5 TEST TYPES AND TEST LEVELS
CHANGE-RELATED TESTS:
- For system integration testing, tests of the application interacting with the credit scoring microservice are re-executed daily as part of continuous deployment of that microservice.
- For acceptance testing, all previously failed tests are re-executed after a defect found in acceptance testing is fixed.
2.4 MAINTENANCE TESTING
Once deployed, a system is often in service for years or even decades. During this time the system and its operational environment are often corrected, changed or extended. Testing that is executed during this life cycle phase is called 'maintenance testing'. Maintenance is also needed to preserve or improve non-functional quality characteristics of the component or system over its lifetime, especially performance efficiency, compatibility, reliability, security, and portability.
2.4 MAINTENANCE TESTING
When any changes are made as part of maintenance, maintenance testing should be performed, both to evaluate the success with which the changes were made and to check for possible side-effects (e.g., regressions) in parts of the system that remain unchanged (which is usually most of the system). Maintenance can involve planned releases and unplanned releases (hot fixes).
2.4 MAINTENANCE TESTING
A maintenance release may require maintenance testing at multiple test levels, using various test types, based on its scope. The scope of maintenance testing depends on:
- The degree of risk of the change, for example, the degree to which the changed area of software communicates with other components or systems
- The size of the existing system
- The size of the change
2.4.1 Triggers for Maintenance
There are several reasons why software maintenance, and thus maintenance testing, takes place, both for planned and unplanned changes:
- Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes, changes of the operational environment (such as planned operating system or database upgrades), upgrades of software, and patches for defects and vulnerabilities
2.4.1 Triggers for Maintenance
- Planned Modifications
- Perfective modifications (adapting software to the user’s wishes, for instance by supplying new functions or enhancing performance);
- Adaptive modifications (adapting software to environmental changes such as new hardware, new systems software or new legislation);
- Corrective planned modifications (deferrable correction of defects).
2.4.1 Triggers for Maintenance
- Ad-hoc corrective modifications
- Ad-hoc corrective modifications are concerned with defects requiring an immediate solution; a hotfix is an ad-hoc way to release such high-priority fixes.
2.4.1 Triggers for Maintenance
- Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained.
- Retirement, such as when an application reaches the end of its life. When an application or system is retired, this can require testing of data migration or archiving if long data retention periods are required.
- Testing restore/retrieve procedures after archiving for long retention periods may also be needed.
- Regression testing may be needed to ensure that any functionality that remains in service still works.
2.4.2 Impact Analysis for Maintenance
- Impact analysis evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change.
- Impact analysis can also help to identify the impact of a change on existing tests.
- Impact analysis may be done before a change is made, to help decide if the change should be made, based on the potential consequences in other areas of the system.
2.4.2 Impact Analysis for Maintenance
Impact analysis can be difficult if:
- Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
- Test cases are not documented or are out of date
- Bi-directional traceability between tests and the test basis has not been maintained
- Tool support is weak or non-existent
- The people involved do not have domain and/or system knowledge
- Insufficient attention has been paid to the software's maintainability during development
TEST LEVELS
[Figure: V-model diagram relating user requirements to the four test levels (component, integration, system, and acceptance testing)]
Chapter 2 "Te lo Resumo así nomás"
In this chapter we reviewed:
- Software Development Lifecycle Models
- Test Levels
- Test types
Chapter 2 "Te lo Resumo así nomás"
Software Development Lifecycle Models
- Testing is not a stand-alone activity. It has its place within a software development life cycle model
Verification: is the deliverable built according to the specification?
Validation: is the deliverable fit for purpose? Does it provide a solution to the problem?
Chapter 2 "Te lo Resumo así nomás"
Software Development Lifecycle Models
- Models: V-Model, Iterative life cycles, Agile Development, XP
Chapter 2 "Te lo Resumo así nomás"
Test Levels
- Component Testing (Unit testing)
- Integration Testing
- System Testing
- Acceptance Testing
Component Integration Testing - System Integration Testing
Can the system be released?
Chapter 2 "Te lo Resumo así nomás"
Test Types
- Evaluating functional quality characteristics (the system's main functions), such as completeness, correctness, and appropriateness
- Evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability
Chapter 2 "Te lo Resumo así nomás"
Test Types
- Functional Testing
- Non-Functional Testing
- White-Box Testing
- Change-Related Testing
What the system should do
How well or how fast something is done
Tests based on the system's internal structure
Changes made to a system (confirmation, regression)
Chapter 2 "Te lo Resumo así nomás"
Maintenance testing...
- Changes/modifications
- Planned and unplanned releases
- Impact analysis (ideally before the change is made)
Question 1
Which of the following statements CORRECTLY describes a role of impact analysis in
Maintenance Testing?
a) Impact analysis is used when deciding if a fix to a maintained system is worthwhile
b) Impact analysis is used to identify how data should be migrated into the maintained system
c) Impact analysis is used to decide which hot fixes are of most value to the user
d) Impact analysis is used to determine the effectiveness of new maintenance test cases
Select ONE option.
ISTQB 2- Software life cycle
Katty Ode
Created on January 26, 2021
Start designing with a free template
Discover more than 1500 professional designs like these:
View
Essential Learning Unit
View
Akihabara Learning Unit
View
Genial learning unit
View
History Learning Unit
View
Primary Unit Plan
View
Vibrant Learning Unit
View
Art learning unit
Explore all templates
Transcript
UCB
GESTIÓN DE CALIDAD DE SISTEMAS
Chapter 2- Testing Throughout the Software Development Lifecycle
START
by Katty Ode
INDEX
3. Test Types
1. Software Development Lifecycle Models
2. Test Levels
4. Maintenance Testing
Chapter 02
Testing throughout the software life cycle
Testing throughout the software life cycle
Objetive: Understand the relationship between development, test activities and work products in the development life cycle, learn levels of testing
Software Development Models
In every development life cycle, a part of testing is focused on verification testing and a part is focused on validation testing
Verification is concerned with evaluating a work product, component or system to determine whether it meets the requirements: is the deliverable built according the specification? Validation is concerned with the evaluating a work product, component or system to determine whether it meets the user need and requirements: is the deliverable fit for purpose? Does it provide a solution to the problem?
VERIFICATION
VALIDATION
SOFTWARE DEVELOPMENT MODELS
V-MODEL
ITERATIVE LIFE CYCLES
2.1.1. V-MODEL
Before discussing V-model --- Waterfall model was one of the earliest models to be designed it has a natural timeline where tasks are executed in a sequential fashion, testing tends to happened at the end of the project life. The V-model was developed to address some of the problems experienced using the traditional waterfall approach. Defects were being found too late in the life cycle, as testing was not involved until the end of the project.
The V-model provides guidance that testing needs to begin as early as possible in the life cycle There are a variety of activities that need to be performed before the end of the coding phase. These activities should be carried out in parallel The V-model is a model that illustrates how testing activities (verification and validation) can be integrated into each phase of the life cycle
2.1.1. V-MODEL
2.1.1 V-MODEL
Although variants of the V-model exist,a common type of V-model uses four test levels.
2.1.1. V-MODEL
Acceptance testing: validation testing with respect to user needs, requirements, and business processes conducted to determine whether or not to accept the system. System testing: concerned with the behavior of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements;
2.1.1. V-MODEL
Integration testing: tests interfaces between components, interactions to different parts of a system such as an operating system, file system and hardware or interfaces between systems Component testing: searches for defects in and verifies the functioning of software components (e.g. modules, programs, objects, classes etc.) that are separately testable;
2.1.1. V-MODEL
Acceptance testing
User
Component testing
System testing
Integration testing
Not all life cycles are sequential. There are also iterative or incremental life cycles where, instead of one large development timeline from beginning to end, we cycle through a number of smaller self-contained life cycle phases for the same project A common feature of iterative approaches is that the delivery is divided into increments or builds with each increment adding new functionality.
2.1.2. ITERATIVE LIFE CYCLES
The initial increment will contain the infrastructure required to support the initial build functionality. Subsequent increments will need testing for the new functionality, regression testing of the existing functionality, and integration testing of both new and existing parts.
2.1.2. ITERATIVE LIFE CYCLES
•Regression testing is increasingly important on all iterations after the first one. This means that more testing will be required at each subsequent delivery phase which must be allowed for in the project plans. • This life cycle can give early market presence with critical functionality, can be simpler to manage because the workload is divided into smaller pieces,
2.1.2. ITERATIVE LIFE CYCLES
2.1.2. ITERATIVE LIFE CYCLES
Rapid Application Development
2.1.2. ITERATIVE LIFE CYCLES
Rapid Application Development
2.1.2. ITERATIVE LIFE CYCLES
Rapid Application Development
2.1.2. ITERATIVE LIFE CYCLES
agile development
Extreme Programming (XP) is currently one of the most well-known agile development life cycle models. The methodology claims to be more human friendly than traditional development methods.
It promotes the generation of business stories to define the functionality. It demands an on-site customer for continual feedback and to define and carry out functional acceptance testing. It promotes pair programming and shared code ownership amongst the developers. It states that component test scripts shall be written before the code is written and that those tests should be automated. It states that integration and testing of the code shall happen several times a day. It states that we always implement the simplest solution to meet today’s problems.
extreme programming (xp)
Some characteristics of XP are:
2.1.2. ITERATIVE LIFE CYCLES
agile development
Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features Kanban: Implemented with or without fixed-length iterations, which can deliver either a single enhancement or feature upon completion, or can group features together to release at once
2.1.3 Testing within a life cycle model
In summary, whichever life cycle model is being used, there are several characteristics of good testing:
Question 1 of 4
Which one of the following is the BEST definition of an incremental development model? a) Defining requirements, designing software and testing are done in phases where in each phase a piece of the system is added b) A phase in the development process should begin when the previous phase is complete c) Testing is viewed as a separate phase which takes place after development has been completed d) Testing is added to development as an increment Select ONE option.
Rightanswer
This page is password protected
Enter the password
Question 3 of 4
What are good practices for testing within the development life cycle? a) Early test analysis and design. b) Different test levels are defined with specific objectives. c) Testers will start to get involved as soon as coding is done. d) A and B above.
Rightanswer
Question 4 of 4
Which option best describes objectives for test levels with a life cycle model? a) Objectives should be generic for any test level. b) Objectives are the same for each test level. c) The objectives of a test level don’t need to be defined in advance. d) Each level has objectives specific to that level.
Rightanswer
end day 4
2.2 TEST LEVELS
Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the test process, performed in relation to software at a given level of development, from individual units or components to complete systems. For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.
test levels
Acceptance testing
User
Component testing
System testing
Integration testing
2.2 TESTING LEVELS
Also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable.Component testing is often done in isolation from the rest of the system depending on the context of the development life cycle and the system.
COMPONENT TESTING
2.2 TESTING LEVELS
Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test frame-work or debugging tool, and in practice usually involves the programmer who wrote the code. Sometimes, depending on the applicable level of risk, component testing is carried out by a different programmer thereby introducing independence. Defects are typically fixed as soon as they are found, without formally recording the incidents found.
COMPONENT TESTING
Component testing
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the component are as designed and specified
- Building confidence in the component’s quality
- Finding defects in the component
- Preventing defects from escaping to higher test levels
In some cases, especially in incremental and iterative development models (e.g., Agile) where code changes are ongoing, automated component regression tests play a key role in building confidence that changes have not broken existing components.OBJECTIVES
Component testing
TEST BASIS
Examples of work products that can be used as a test basis for component testing include:
Component testing
TEST OBJECTS
Typical test objects for component testing include:
Component testing
TYPICAL DEFECTS AND FAILURES
Examples of typical defects and failures for component testing include:
2.2 TESTING LEVELS
INTEGRATIONTESTING
Tests interfaces between components, interactions to different parts of a system such as an operating system, file system and hardware or interfaces between systems. There are two different levels of integration testing which may be carried out on test objects of varying size"
Component integration testing focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process. System integration testing focuses on the interactions and interfaces between systems, packages, and microservices. System integration testing can also cover interactions with, and interfaces provided by, external organizations (e.g., web services). System integration testing may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development)
2.2 TESTING LEVELS
INTEGRATIONTESTING
2.2 TESTING LEVELS
Component integration tests and system integration tests should concentrate on the integration itself. Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers. Ideally, testers performing system integration testing should understand the system architecture, and should have influenced integration planning.
INTEGRATIONTESTING
2.2 TESTING LEVELS
The greater the scope of integration, the more difficult it becomes to isolate failures to a specific interface, which may lead to an increased risk. This leads to varying approaches to integration testing. One extreme is that all components or systems are integrated simultaneously, after which everything is tested as a whole. This is called ‘big-bang’ integration testing. Big-bang testing has the advantage that everything is finished before integration testing starts. There is no need to simulate (as yet unfinished) parts. The major disadvantage is that in general it is time-consuming and difficult to trace the cause of failures with this late integration.
INTEGRATIONTESTING
Another extreme is that all programs are integrated one by one, and a test is carried out after each step (incremental testing). Between these two extremes, there is a range of variants. The incremental approach has the advantage that the defects are found early in a smaller assembly when it is relatively easy to detect the cause. A disadvantage is that it can be time-consuming
2.2 TESTING LEVELS
INTEGRATIONTESTING
INTEGRATION testing
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the interfaces are as designed
and specified
- Building confidence in the quality of the interfaces
- Finding defects (which may be in the interfaces themselves or within the components or systems)
- Preventing defects from escaping to higher test levels
As with component testing, in some cases integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems.
INTEGRATION testing
TEST BASIS
Examples of work products that can be used as a test basis for integration testing include:
INTEGRATION testing
TEST OBJECTS
Typical test objects for integration testing include:
INTEGRATION testing
TYPICAL DEFECTS AND FAILURES
Component integration testing examples:
INTEGRATION testing
TYPICAL DEFECTS AND FAILURES
System integration testing examples:
2.2 TESTING LEVELS
SYSTEM TESTING
System testing focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform. System testing is most often the final test on behalf of development, carried out to verify that the system to be delivered meets the specification; its purpose may be to find as many defects as possible.
SYSTEM testing
OBJECTIVES
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the system are as designed and specified
- Validating that the system is complete and will work as expected
- Building confidence in the quality of the system as a whole
- Finding defects
- Preventing defects from escaping to higher test levels or production
For certain systems, verifying data quality may also be an objective. System testing often produces information that is used by stakeholders to make release decisions. System testing may also satisfy legal or regulatory requirements or standards.
SYSTEM testing
TEST BASIS
Examples of work products that can be used as a test basis for system testing include:
SYSTEM testing
TEST OBJECTS
Typical test objects for system testing include:
SYSTEM testing
TYPICAL DEFECTS AND FAILURES
Examples of typical defects and failures for system testing include:
2.2 TESTING LEVELS
ACCEPTANCE TESTING
When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing. The acceptance test should answer questions such as: 'Can the system be released?' Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk.
2.2 TESTING LEVELS
Common forms of acceptance testing include the following:
ACCEPTANCE TESTING
ACCEPTANCE TESTING
User acceptance testing (UAT)
User acceptance testing of the system is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost, and risk.
ACCEPTANCE TESTING
Operational acceptance testing (OAT)
The acceptance testing of the system by operations or systems administration staff is usually performed in a (simulated) production environment. The tests focus on operational aspects, and may include:
The main objective of operational acceptance testing is building confidence that the operators or system administrators can keep the system working properly for the users in the operational environment, even under exceptional or difficult conditions.
ACCEPTANCE TESTING
Contractual and regulatory acceptance testing
Contractual acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Regulatory acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations.
ACCEPTANCE TESTING
Alpha and beta testing
ACCEPTANCE TESTING
- Alpha testing is performed at the developing organization's site, not by the development team but by potential or existing customers and/or an independent test team.
- Beta testing, or field testing, sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization, where the defects are repaired.
One objective of alpha and beta testing is building confidence among potential or existing customers and/or operators that they can use the system under normal, everyday conditions, and in the operational environment(s), to achieve their objectives with minimum difficulty, cost, and risk. Another objective may be the detection of defects related to the conditions and environment(s) in which the system will be used, especially when those conditions and environment(s) are difficult to replicate for the development team.
ACCEPTANCE testing
OBJECTIVES
- Establishing confidence in the quality of the system as a whole
- Validating that the system is complete and will work as expected
- Verifying that functional and non-functional behaviors of the system are as specified
Acceptance testing may produce information to assess the system's readiness for deployment and use by the customer (end-user).
Acceptance testing
TEST BASIS
Examples of work products that can be used as a test basis for acceptance testing include:
Acceptance testing
TEST BASIS
In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the following work products can be used:
ACCEPTANCE testing
TEST OBJECTS
Typical test objects for acceptance testing include:
ACCEPTANCE testing
TYPICAL DEFECTS AND FAILURES
Examples of typical defects and failures for acceptance testing include:
end day 5
TEST LEVELS
[Figure: V-model test levels: Component testing → Integration testing → System testing → Acceptance testing (performed with the User)]
Question 1 of 3
What type of testing is normally conducted to verify that a product meets a particular regulatory requirement?
a. Unit testing
b. Integration testing
c. System testing
d. Acceptance testing
Right answer: d. Acceptance testing
Question 2 of 3
Consider the following types of defects that a test level might focus on:
1. Defects in separately testable modules or objects
2. Not focused on identifying defects
3. Defects in interfaces and interactions
4. Defects in the whole test object
Which of the following lists correctly matches test levels with the defect focus options given above?
a) 1 = performance test; 2 = component test; 3 = system test; 4 = acceptance test
b) 1 = component test; 2 = acceptance test; 3 = system test; 4 = integration test
c) 1 = component test; 2 = acceptance test; 3 = integration test; 4 = system test
d) 1 = integration test; 2 = system test; 3 = component test; 4 = acceptance test
Select ONE option.
Right answer: c
Question 3 of 3
Given that the testing being performed has the following attributes:
• Based on interface specifications
• Focused on finding failures in communication
• Uses incremental testing
Which of the following test levels is MOST likely being performed?
a) Integration testing
b) Acceptance testing
c) System testing
d) Component testing
Right answer: a) Integration testing
2.3 TEST TYPES
A test type is focused on a particular test objective.
2.3 TEST TYPES
FUNCTIONAL TESTING
Functional testing evaluates the functions that a component or system should perform; in other words, "what" the system should do. Functional tests are derived from work products that describe those functions, such as requirements specifications, use cases, or user stories.
2.3 TEST TYPES
NON-FUNCTIONAL TESTING
Non-functional testing evaluates characteristics of systems and software such as usability, performance efficiency, or security; in other words, "how well" or how fast the system behaves.
2.3 TEST TYPES
WHITE-BOX TESTING
White-box testing derives tests from the system's internal structure or implementation, such as the code, the architecture, or the workflows and data flows within the system.
2.3 TEST TYPES
CHANGE-RELATED TESTING
When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.
2.3 TEST TYPES
CHANGE-RELATED TESTING
Confirmation testing (re-testing)
When a test fails and we determine that the cause of the failure is a software defect, the defect is reported, and we can expect a new version of the software in which the defect has been fixed. In this case we need to execute the test again to confirm that the defect has indeed been fixed. This is known as confirmation testing (also known as re-testing).
2.3 TEST TYPES
CHANGE-RELATED TESTING
When doing confirmation testing, it is important to ensure that the test is executed in exactly the same way as it was the first time, using the same inputs, data, and environment. If the test now passes, does this mean that the software is now correct? Well, we now know that at least one part of the software is correct: where the defect was. But this is not enough. The fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these 'unexpected side effects' of fixes is to do regression testing.
2.3 TEST TYPES
CHANGE-RELATED TESTING
Regression testing
It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behavior of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Such unintended side effects are called regressions. Regression testing involves running tests to detect such unintended side effects.
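A small Python sketch of the difference between the two: suppose a defect report said that half-cent amounts were rounded incorrectly, and a fix was delivered. The first test below is the confirmation test (the exact case from the report, re-executed with the same input); the second is a regression test guarding neighboring behavior that already worked before the fix. The rounding scenario is invented for illustration.

```python
# Confirmation vs. regression testing sketch. round_to_cents represents the
# fixed code: banking rounding was corrected to round half-cents upward.
from decimal import Decimal, ROUND_HALF_UP


def round_to_cents(amount: str) -> Decimal:
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)


def test_confirmation_half_cent_rounds_up():
    # Confirmation test (re-test): the exact input from the defect report.
    assert round_to_cents("10.005") == Decimal("10.01")


def test_regression_existing_behavior_unchanged():
    # Regression tests: cases that already passed before the fix, re-run to
    # detect unintended side effects of the change.
    assert round_to_cents("10.004") == Decimal("10.00")
    assert round_to_cents("10.010") == Decimal("10.01")
```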
2.3.5 TEST TYPES AND TEST LEVELS
It is possible to perform any of the test types mentioned above at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests are given below across all test levels, for a banking application.
FUNCTIONAL TESTS:
For component testing, tests are designed based on how a component should calculate compound interest. For component integration testing, tests are designed based on how account information captured at the user interface is passed to the business logic. For system testing, tests are designed based on how account holders can apply for a line of credit on their checking accounts.
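As a hypothetical sketch of that component-level functional test, the code below implements the standard compound interest formula A = P(1 + r/n)^(nt) and checks it against hand-computed values; the function name and the figures are illustrative only.

```python
# Functional component test sketch for a compound interest calculation.
import pytest


def compound_amount(principal: float, annual_rate: float,
                    periods_per_year: int, years: float) -> float:
    """Final amount when interest is compounded n times per year for t years."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)


def test_compound_amount_annual_compounding():
    # 1000 at 5% compounded once a year for 2 years: 1000 * 1.05**2 = 1102.50
    assert compound_amount(1000, 0.05, 1, 2) == pytest.approx(1102.50)


def test_compound_amount_monthly_compounding():
    # 1000 at 12% compounded monthly for 1 year: 1000 * 1.01**12 ≈ 1126.83
    assert compound_amount(1000, 0.12, 12, 1) == pytest.approx(1126.83, abs=0.01)
```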
2.3.5 TEST TYPES AND TEST LEVELS
FUNCTIONAL TESTS:
For system integration testing, tests are designed based on how the system uses an external microservice to check an account holder's credit score. For acceptance testing, tests are designed based on how the banker handles approving or declining a credit application.
2.3.5 TEST TYPES AND TEST LEVELS
NON-FUNCTIONAL TESTS:
For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation. For component integration testing, security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic. For system testing, portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices.
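Counting exact CPU cycles requires platform-specific tooling; as a rough stand-in, the sketch below times a hypothetical total-interest calculation with Python's timeit module and asserts a time budget, which is one common way to wire a component-level performance check into an automated suite. The function, data set, and budget are all illustrative.

```python
# Component-level performance check sketch: measure the average execution
# time of an interest calculation and fail the test if it exceeds a budget.
import timeit


def total_interest(balances: list, annual_rate: float) -> float:
    """Hypothetical 'complex' calculation: one month's interest over many accounts."""
    return sum(b * annual_rate / 12 for b in balances)


def test_total_interest_stays_within_time_budget():
    balances = [100.0 + i for i in range(10_000)]
    # Average over repeated runs to reduce timing noise.
    seconds = timeit.timeit(lambda: total_interest(balances, 0.12), number=100) / 100
    assert seconds < 0.01, f"interest calculation too slow: {seconds:.6f}s"
```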
2.3.5 TEST TYPES AND TEST LEVELS
NON-FUNCTIONAL TESTS:
For system integration testing, reliability tests are designed to evaluate system robustness if the credit score microservice fails to respond. For acceptance testing, usability tests are designed to evaluate the accessibility of the banker's credit processing interface for people with disabilities.
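A sketch of how such a reliability test might look: a fake credit score service that raises a timeout stands in for the real microservice, and the test asserts that the system degrades gracefully instead of crashing. The names and the fallback policy (routing to manual review) are invented for illustration.

```python
# Reliability/robustness test sketch: what does the system do when the
# credit score microservice fails to respond?


class ScoreServiceTimeout(Exception):
    pass


class FailingScoreService:
    """Test double simulating a credit score service that never responds."""

    def score_for(self, account_id: str) -> int:
        raise ScoreServiceTimeout("credit score service did not respond")


def decide_credit_with_fallback(account_id: str, score_service) -> str:
    try:
        score = score_service.score_for(account_id)
    except ScoreServiceTimeout:
        # Robustness requirement: degrade gracefully rather than crash.
        return "MANUAL_REVIEW"
    return "APPROVED" if score >= 600 else "DECLINED"


def test_system_degrades_gracefully_when_score_service_fails():
    assert decide_credit_with_fallback("42", FailingScoreService()) == "MANUAL_REVIEW"
```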
2.3.5 TEST TYPES AND TEST LEVELS
WHITE-BOX TESTS:
For component testing, tests are designed to achieve complete statement and decision coverage for all components that perform financial calculations. For component integration testing, tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic. For system testing, tests are designed to cover sequences of web pages that can occur during a credit line application.
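To illustrate decision coverage at the component level, the sketch below pairs a small, invented overdraft-fee routine with three tests, each forcing a different branch outcome. Running these under a branch-aware coverage tool (for example coverage.py with its --branch option) would confirm that every decision outcome is exercised.

```python
# White-box test sketch: tests chosen to achieve full decision coverage.


def overdraft_fee(balance: float, overdraft_protected: bool) -> float:
    if balance >= 0:            # decision 1: True -> no fee
        return 0.0
    if overdraft_protected:     # decision 2: True -> reduced fee
        return 5.0
    return 35.0                 # decision 2: False -> full fee


def test_positive_balance_no_fee():          # decision 1 -> True
    assert overdraft_fee(120.0, False) == 0.0


def test_protected_overdraft_reduced_fee():  # decision 1 -> False, 2 -> True
    assert overdraft_fee(-40.0, True) == 5.0


def test_unprotected_overdraft_full_fee():   # decision 1 -> False, 2 -> False
    assert overdraft_fee(-40.0, False) == 35.0
```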
2.3.5 TEST TYPES AND TEST LEVELS
WHITE-BOX TESTS:
For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score microservice. For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.
2.3.5 TEST TYPES AND TEST LEVELS
CHANGE-RELATED TESTS:
For component testing, automated regression tests are built for each component and included within the continuous integration framework. For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository. For system testing, all tests for a given workflow are re-executed if any screen on that workflow changes.
2.3.5 TEST TYPES AND TEST LEVELS
CHANGE-RELATED TESTS:
For system integration testing, tests of the application interacting with the credit scoring microservice are re-executed daily as part of the continuous deployment of that microservice. For acceptance testing, all previously failed tests are re-executed after a defect found in acceptance testing is fixed.
2.4 MAINTENANCE TESTING
Once deployed, a system is often in service for years or even decades. During this time the system and its operational environment are often corrected, changed, or extended. Testing that is executed during this life cycle phase is called 'maintenance testing'. Maintenance is also needed to preserve or improve non-functional quality characteristics of the component or system over its lifetime, especially performance efficiency, compatibility, reliability, security, and portability.
2.4 MAINTENANCE TESTING
When any changes are made as part of maintenance, maintenance testing should be performed, both to evaluate the success with which the changes were made and to check for possible side effects (e.g., regressions) in parts of the system that remain unchanged (which is usually most of the system). Maintenance can involve planned releases and unplanned releases (hot fixes).
2.4 MAINTENANCE TESTING
A maintenance release may require maintenance testing at multiple test levels, using various test types, based on its scope. The scope of maintenance testing depends on:
2.4.1 Triggers for Maintenance
There are several reasons why software maintenance, and thus maintenance testing, takes place, both for planned and unplanned changes
2.4.1 Triggers for Maintenance
Ad-hoc testing
A hotfix is an ad-hoc way to release high-priority fixes.
2.4.1 Triggers for Maintenance
Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained.
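A hypothetical sketch of a data conversion test for such a migration: a record exported from a legacy application is mapped onto the new system's schema, and the test verifies that identifiers and amounts survive the conversion intact. The field names and mapping rules are invented.

```python
# Data conversion test sketch for a migration from a legacy application.


def convert_legacy_record(legacy: dict) -> dict:
    """Map a legacy record onto the new system's schema."""
    return {
        "account_id": legacy["ACCT_NO"].lstrip("0"),
        "balance_cents": round(float(legacy["BALANCE"]) * 100),
        # The legacy system stored no currency; assume the documented default.
        "currency": legacy.get("CURRENCY", "USD"),
    }


def test_conversion_preserves_ids_and_balances():
    legacy = {"ACCT_NO": "000042", "BALANCE": "1234.56"}
    converted = convert_legacy_record(legacy)
    assert converted == {"account_id": "42",
                         "balance_cents": 123456,
                         "currency": "USD"}
```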
2.4.2 Impact Analysis for Maintenance
Impact analysis evaluates the changes that were made for a maintenance release, to identify the intended consequences as well as expected and possible side effects, and to identify the areas in the system that will be affected by the change. Impact analysis can also be done before a change is made, to help decide whether the change should be made.
Impact analysis can be difficult if:
- Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
- Test cases are not documented or are out of date
- Bi-directional traceability between tests and the test basis has not been maintained
- Tool support is weak or non-existent
- The people involved do not have domain and/or system knowledge
- Insufficient attention was paid to the software's maintainability during development
TEST LEVELS
[Figure: V-model test levels: Component testing → Integration testing → System testing → Acceptance testing (performed with the User)]
Chapter 2 "Te lo Resumo así nomás"
In this chapter we reviewed:
Chapter 2 "Te lo Resumo así nomás"
Software Development Lifecycle Models
- Testing is not a stand-alone activity. It has its place within a software development life cycle model
Verification: is the deliverable built according to the specification?
Validation: is the deliverable fit for purpose? Does it provide a solution to the problem?
Chapter 2 "Te lo Resumo así nomás"
Software Development Lifecycle Models
Chapter 2 "Te lo Resumo así nomás"
Test Levels
Component Integration Testing
System Integration Testing
Can the system be released?
Chapter 2 "Te lo Resumo así nomás"
Test Types
Chapter 2 "Te lo Resumo así nomás"
Test Types
- Functional testing: what the system should do
- Non-functional testing: how well or how fast something is done
- White-box testing: tests based on the system's internal structure
- Change-related testing: tests of changes made to a system (confirmation, regression)
Chapter 2 "Te lo Resumo así nomás"
Maintenance testing...
Question 1
Which of the following statements CORRECTLY describes a role of impact analysis in Maintenance Testing?
a) Impact analysis is used when deciding if a fix to a maintained system is worthwhile
b) Impact analysis is used to identify how data should be migrated into the maintained system
c) Impact analysis is used to decide which hot fixes are of most value to the user
d) Impact analysis is used to determine the effectiveness of new maintenance test cases
Select ONE option.
Right answer: a