Planning¶
Defining Project Tasks¶
The selectable projects can be found on CooSpace (in the merged practice forum).
For the selected project, the team must precisely define the related tasks and how they plan to implement them. You must take on one task from each of the two task types below:
- Static testing or coverage measurement
- What is the goal, and what will be the expected outcome?
- Execution of functional, load, or usability tests
- What is the goal, and what will be the expected outcome?
When designating the tasks, you must also answer the following questions:
- What will be the exit criteria of the selected tasks? In other words, when do we consider the committed task to be completed?
- What will be the deliverables? Which products (code, documents, configuration files) will be presented and handed over to the instructor at the end of the task?
- What effort is expected to be required to solve the task?
Exit criteria
For every testing activity, you must define when that activity is considered complete. This is especially important because exhaustive testing is not possible, yet objective evaluation requires knowing to what extent and quality we were able to perform the committed activity.
Projects completed to a high standard with extra content may be presented in the last lecture and may earn a proposed grade.
The decision is at the lecturer’s discretion
The practice instructors’ role is limited to nominating the teams—based on the submitted project work—that they consider worthy of a proposed grade. The decision lies with the lecturer of the course, and it is also conditional on the students concerned regularly attending the lectures.
Detailed requirements for the project work can be read at this link.
Test Planning¶
The lecture discusses test plans in detail—their content and formal elements—while this material summarizes the most important points from a pragmatic perspective.
In agile frameworks, overly detailed documentation is no longer typical; concise, practical solutions are preferred. With this in mind, below we list the content elements (supplemented by key administrative components) that typically constitute parts of test plans even in agile frameworks.
- Unique identifier: Anything that uniquely identifies the document. Typically includes the project name (and the sponsor, if any), the year, and possibly a sequence number—for example, a hypothetical `SED-2025-TP-01`.
Why do we need a unique identifier?
You may have wondered why every document (not only those) needs a unique identifier. The short answer was already given by Antoine de Saint-Exupéry in The Little Prince: “...because grown-ups like numbers...”.
The longer answer is that various documents and test artifacts are produced during testing, and we need some way to know how each product links to others. Without these identifiers, primordial chaos would reign everywhere.
The image above is a photograph of a painting by George Frederic Watts (1817–1904) titled Chaos. The image was copied from Wikimedia.
- Authors: List the names and positions of the authors.
- Versioning: The test plan—like most working documents—undergoes multiple revisions during its lifecycle. Each revision is labeled with a version number that must be incremented with every change. Versioning is essentially a table at the beginning of the document showing the version numbers, the release dates, and the related changes.
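A minimal example of such a versioning table (the dates and entries are illustrative):

| Version | Date | Change |
| --- | --- | --- |
| 0.1 | 2025-02-10 | Initial draft |
| 0.2 | 2025-02-17 | Risk analysis section added after review |
| 1.0 | 2025-02-24 | Approved baseline |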
- Testing context: In this chapter we define the testing objective, describe the scope of testing, present the test basis on which the requirements are determined, and specify which test conditions are included in or excluded from the scope.
What is the test basis?
The set of documents from which the requirements for the components under test are derived. Simply put, these documents tell us which software component to test and what the expectations, requirements, and constraints are.
What do we call test conditions?
Test conditions are any part, property, element, or event of a component, program, or software that can be verified using one or more test cases.
Test case and test condition are not the same!
Do not confuse test conditions with test cases. Test cases are defined starting from the test conditions and include preconditions, the activities to be executed, the expected results, and the postconditions.
Test conditions excluded from testing
Test conditions are divided into two sets: those to be tested and those excluded from testing. Conditions that the test team decides not to test must be explicitly presented in the test plan along with the rationale.
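To make the earlier distinction between test condition and test case concrete, here is a minimal pytest sketch. The test condition is the verifiable property ("login is rejected with a wrong password"); the test case below is one concrete realization of it, with precondition, action, and expected result. The `login` function is a hypothetical stand-in for the item under test.

```python
# Test condition: "login is rejected with a wrong password".
# The test case below is ONE concrete test derived from that condition.
def login(user: str, password: str) -> bool:
    # Hypothetical stand-in for the item under test.
    return user == "alice" and password == "secret"

def test_login_rejected_with_wrong_password():
    # Precondition: the user "alice" exists with password "secret".
    # Action: attempt to log in with a wrong password.
    result = login("alice", "wrong-password")
    # Expected result (postcondition): access is denied.
    assert result is False
```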
- Assumptions and constraints: There are external factors whose presence is necessary for the team to carry out an activity and, conversely, factors whose presence may prevent certain activities from being performed. Plans should list these cases, because this is how the team ultimately excludes responsibility for circumstances outside its control.
Assumptions and constraints
During the project we may assume that the software under test can be built and run, and that nothing blocks the required environment setup. In open-source projects this is not always the case. If compatibility issues arise that hinder task execution, you may switch projects during the practice; however, the change must be thoroughly justified.
Constraints may include legal and technical limitations. Examples include handling confidential materials, or—on the technical side—cases where, due to a component’s characteristics (e.g., dynamic solutions), unit testing is not feasible or only feasible with significant effort.
- Stakeholders: In this section we list the individuals involved in the testing project and their roles, functions, and responsibilities within the testing process. Stakeholders are not only testers; representatives of the business side are also included, who typically participate in defining requirements and accepting results (e.g., the product owner).
- Communication: Specify which channels will be used for information exchange during the project, who must respond to which inquiries and how quickly, and to whom the team must escalate issues it cannot resolve independently. Communication also includes the artifacts provided by management tools, such as Kanban/Scrum boards, burndown charts, and CI/CD overview boards. Likewise, do not forget the written communication channels (email, chat programs, formal reports); list these here as well.
- Risk management: Risk is the possibility of danger or loss associated with an action. A project can suffer losses at many points; these must be identified and assessed during planning. In the assessment we enumerate the events expected during the project along with their probability of occurrence and their impact if they occur.
Risk is assessed along two scales (the probability of the undesirable event and the adverse impact—i.e., the magnitude of damage—if it occurs), which can be combined in a risk matrix. The joint classification on the two scales determines the risk level of a given event. In the matrix above, green means a low, yellow a medium, and red a high risk level.
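As an additional illustration, a common 3×3 form of such a matrix, with the levels spelled out in words instead of colors:

| Probability \ Impact | Low impact | Medium impact | High impact |
| --- | --- | --- | --- |
| High probability | medium | high | high |
| Medium probability | low | medium | high |
| Low probability | low | low | medium |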
Before conducting the risk assessment, gather every event that—even with low probability—may occur during the project. The events must be divided into two categories:
- One category includes events affecting the product (e.g., software inoperability, problems with availability of required external tools, compatibility issues, etc.). This category is called product risk.
- The other category includes adverse events affecting the project itself (e.g., illness or departure of project members, deadline changes, etc.). This category is called project risk.
Product and project risks must be evaluated and documented separately in the project plan.
Risk matrix in the project task
If the number of risks in both categories is small, they can be represented in a single matrix in the project task, but the description must still specify the category. In real projects the number of risks in both categories is usually higher, so a separate analysis is always justified.
Risk analysis also includes plans for risk mitigation and the steps for damage reduction in case of events that have already occurred.
- Test approach: In this section we describe the components of the plan concerning the specific task. This section should ideally be structured into sub-sections.
Test strategy
An organization’s test strategy defines the testing activities to be applied in its testing processes and how and when they are to be implemented. Simply put, the test strategy determines which testing activities are carried out when, in which projects, and in what way.
Test approach
The test approach is the adaptation of the test strategy to the current project. It specifies which testing activities the team intends to perform, when, and how, given the project’s characteristics.
Test strategy vs. test approach
A test strategy is general guidance, usually part of organizational policies, while the test approach is the project-specific rule, tailored to the characteristics of the given testing project.
- Test levels: In this subsection we specify the tests corresponding to the levels of the V-model, i.e., we answer the question of at which levels we will test the software and its components. According to the V-model these levels are: unit tests, integration or module tests, system tests, and acceptance tests.
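One lightweight way to keep the levels apart in a single repository is to tag tests, for example with pytest markers. A sketch under our own naming convention (the marker names are not mandated by the V-model; register them in `pytest.ini` to avoid warnings):

```python
import pytest

def add(a: int, b: int) -> int:
    # Trivial unit under test.
    return a + b

@pytest.mark.unit  # V-model level: unit test
def test_add():
    assert add(2, 3) == 5

# Run only one level at a time, e.g.:  pytest -m unit
```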
- Grouping test types by test techniques: Tests can be grouped in different ways. One grouping is based on the applied test techniques, distinguishing black-box, white-box, experience-based, and collaboration-based techniques.
- Black-box testing is specification-based, i.e., test cases are derived from documents independent of the item under test. If such documents are not available, this is both a major risk and limitation. The goal of black-box testing is typically to verify whether the system works according to the specification.
- White-box testing is structure-based, where test cases are derived from the internal structure or implementation of the system. The goal is to achieve acceptable coverage of the system’s fundamental structures.
- Experience-based testing relies on the tester’s knowledge and experience when designing test cases. It includes exploratory techniques capable of finding defects not revealed by black-box or white-box methods, though heavily dependent on tester skills.
- Collaboration-based techniques help prevent defects through teamwork, collaboration, and regular communication among team members.
Where does the efficiency of collaboration come from?
Teamwork means that the collective effort achieves greater efficiency and better results than individual contributions alone. Alignment and commitment to a shared goal increase the likelihood of project success. Teamwork is not just about efficiency—it fosters a culture of cohesion and organizational performance improvement. Key to teamwork are communication, cooperation, and mutual respect.
- Grouping test types by function: This grouping focuses on the function and purpose of testing, independent of the technique-based classification.
- Functional testing checks whether the system performs the functions for which it was created. Goals include functional completeness, functional correctness (executing tasks properly), and functional appropriateness (performing the functions it was designed or commissioned for).
- Non-functional testing checks the quality attributes of the system. The ISO/IEC 25010:2023 standard provides an overview of software quality attributes, shown in the figure below.
- Static and dynamic testing: As the title suggests, this is another classification, independent of the two previous ones.
- Static testing does not require executing the software under test. Instead, code, specifications, design artifacts, or other work products are examined manually (e.g., reviews) or with tools (e.g., static analysis). Goals include improving quality, detecting defects, and evaluating attributes such as readability, completeness, correctness, testability, and consistency.
- Dynamic testing requires execution of the code.
Handling multiple groupings
These different categories may seem confusing, but remember: they are based on different perspectives. The same testing task can simultaneously be black-box, functional, and dynamic—for example, a system test.
- Test techniques: Test techniques assist testers in analysis (what to test) and technical design (how to test). They support the systematic creation of a relatively small yet sufficient set of test cases.
| Test technique name | Type |
| --- | --- |
| Equivalence partitioning | black-box |
| Boundary value analysis (BVA) | black-box |
| Decision table testing | black-box |
| State transition testing | black-box |
| Statement testing and coverage | white-box |
| Branch testing and coverage | white-box |
| Error guessing | experience-based |
| Exploratory testing | experience-based |
| Checklist-based testing | experience-based |
| User story-based testing | collaboration-based |
| Acceptance test-driven development (ATDD) | collaboration-based |

Use case-based testing
This technique derives test cases from use cases. In agile methodologies, use cases are often replaced by user stories and acceptance criteria, leading to user story-based testing.
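As a taste of the techniques in the table above, here is a boundary value analysis sketch for a hypothetical validator that accepts ages in the closed range [0, 130]; the function and the range are illustrative, not taken from any real project.

```python
import pytest

def is_valid_age(age: int) -> bool:
    # Hypothetical item under test: accepts ages in [0, 130].
    return 0 <= age <= 130

# BVA: test each boundary value and its immediate neighbours.
@pytest.mark.parametrize("age, expected", [
    (-1, False), (0, True), (1, True),       # lower boundary
    (129, True), (130, True), (131, False),  # upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) is expected
```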
- Test deliverables: Here we list the artifacts to be delivered after the testing project concludes. Deliverables must be explicitly listed in the test plan.
What are test artifacts?
Test artifacts are the output work products of testing activities.
Examples include:
- outputs of test planning,
- outputs of test monitoring and control,
- outputs of test analysis,
- outputs of technical test design,
- outputs of test implementation,
- outputs of test execution,
- outputs of test closure.
- Entry and exit criteria:
- Entry criteria are conditions that must be met before testing can begin. Examples include resource availability (people, tools, environments, test data, budget, time), testability (test basis, testable requirements, user stories, test cases), and initial quality of the test item (e.g., smoke tests passed).
- Exit criteria are conditions that define when the team stops testing. Typical examples include measures of thoroughness (coverage level, number of unresolved defects, defect density, failed tests) and completion conditions (planned tests executed, static testing performed, all found defects reported, all regression tests automated). Exhaustion of time or budget can also be an exit criterion, provided stakeholders review and accept the risk of releasing with less testing.
- In agile development, exit criteria are often called the Definition of Done, specifying objective metrics for a releasable item. Entry criteria for starting the development/testing of a user story are called the Definition of Ready.
Define only exit criteria controlled by testers!
Do not define exit criteria stating that a certain percentage of tests must pass. Testers are tasked with finding defects; fixing them—if testing is a separate project—is not their responsibility.
What are smoke tests?
A smoke test checks the basic functionality of a software application. Developers or testers verify whether the application launches without critical errors or crashes. The term “smoke” comes from hardware testing, where devices were powered on to check for smoke or obvious failures. In software, it similarly verifies that the application loads, the UI appears, and core features work. Smoke testing is not a replacement for thorough testing, but an initial step in defect detection and quality assurance.
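A minimal smoke test sketch in pytest; `myapp` and `create_app` are hypothetical placeholders for the application under test, not a real package:

```python
import pytest

# Skip the whole module if the application is not installed at all.
myapp = pytest.importorskip("myapp")

def test_application_starts():
    # Smoke check: the application object can be created without crashing.
    app = myapp.create_app()
    assert app is not None
```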
- Metrics: Metrics help management monitor testing progress and efficiency against schedule and budget, and also evaluate testing effectiveness. They must be included in the test plan.
What metrics can we define during planning?
Commonly used metrics include:
- Project progress metrics (task completion, resource use, test effort)
- Test progress metrics (test case implementation, test environment setup, executed/not executed tests, passed/failed tests, execution time)
- Product quality metrics (availability, response time, mean time to failure)
- Defect metrics (number of found/fixed defects, defect density, defect detection rate)
- Risk metrics
- Coverage metrics
- Cost metrics
Fixing defects is not the tester’s job—or is it?
In classical testing projects, testing is a separate activity (except for developer testing). In agile, testers are part of the team, so measuring defect fix rates makes sense. In this course, we do not follow that model; although we strive for agility, the software under test remains independent of the testers. Therefore, collecting data on fixes is unnecessary.
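Returning to the metrics listed above, a tiny worked example of a defect metric, defect density (all numbers are made up):

```python
# Defect density = defects found / size of the test item (in KLOC).
defects_found = 42
size_kloc = 17.5   # thousand lines of code
defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density:.1f} defects/KLOC")  # -> 2.4
```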
- Test data requirements: Testing software requires defining test cases derived from the test conditions. Test cases must provide input in the expected formats, especially when testing APIs or performing integration tests. The expected data format (schema, types, etc.) must be specified in the test plan. This requires prior study of the software under test.
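A sketch of how such a data requirement can be pinned down precisely, here as a Python `TypedDict` for a hypothetical API endpoint (the endpoint, field names, and constraints are illustrative):

```python
from typing import TypedDict

class NewUserRequest(TypedDict):
    """Expected input of the hypothetical POST /users endpoint."""
    name: str    # non-empty, at most 100 characters
    email: str   # syntactically valid e-mail address
    age: int     # 0 <= age <= 130

# A sample record that test cases can feed to the API:
sample_input: NewUserRequest = {"name": "Ada", "email": "ada@example.com", "age": 36}
```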
- Test environment requirements: The test environment is the hardware and software environment containing all components required for executing the tests. The plan must specify the minimum hardware requirements, the OS/runtime specifications, and the automation and management tools together with their requirements.
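An illustrative minimal environment specification (all values are made up):

- Hardware: 4 CPU cores, 8 GB RAM, 20 GB free disk space
- OS and runtime: Ubuntu 22.04, Python 3.11
- Tools: pytest 8.x, coverage.py, GitLab CI runner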
- Deviations from guidelines: If the organization has test strategies or policies, the test plan must align with them. However, deviations may be necessary to ensure successful testing. These deviations and their rationale must be listed.
- Budget and schedule: Although the practice work does not require a budget, in real projects this is an essential element of every plan, including test plans. Scheduling, however, is required even here, so submitted test plans must include a section on scheduling. A Gantt chart is a clear way to illustrate a schedule: it shows the dependencies and overlaps between tasks, aiding progress evaluation and resource allocation.
- Responsibilities: Testing tasks can be distributed among the team members. In agile this usually happens via ticket assignment rather than predefined roles, but explicit allocation is also possible. If responsibilities are distributed, create a table listing all testers and their roles. Always ensure team-level responsibility, with a reviewer checking completion and quality.
- Approver: In the practice projects, the instructor is the approver.
An example test plan is available at this link.
Creating a test plan is a concrete task
Open an issue in GitLab for the test plan, enter the required details, and assign the related milestone (the first milestone, which covers preparation and planning). Since the course aims to provide agile experience, you must also estimate the effort and tag the issue with the results. Upload the test plan itself to the wiki page; the issue should contain only the meta-information about it.
Bug Reports¶
The primary purpose of a bug report is to inform developers about where and what defect was found so they can fix it. Fixing is only possible if developers can reproduce the issue and determine its root cause. Therefore, a bug report must include all information needed for reproduction. Administrative details may also be required, and the fixing process and status must be traceable.
A possible content of bug reports includes:
- Unique identifier
- Should include project ID, artifact type, serial number, and date.
- Title and short description
- Date, author, approver
- Test item
- Exact name, version,