Collaboration-Based Testing and Experience-Based Techniques¶
Collaboration-Based Testing Approaches¶
The goal of collaboration-based testing is to ensure that different roles work in a coordinated way, using shared communication to prevent defects and build a common understanding of how the product works.
- Testing is the responsibility of the entire team.
- Testers actively participate in clarifying requirements.
- Early involvement (Shift Left) reduces defect costs.
Review and collaboration practices (Three Amigos)¶

The three perspectives:
- Business – what should the system do?
- Development – how can it be implemented technically?
- Testing – how can it be verified? what are the risks?
Testers take part in preventing defects, not only detecting them.
Acceptance criteria clarification¶
Test design and requirement interpretation should start from examples and concrete test inputs.
The three elements:
- Rule – business logic.
- Example – testable situations.
- Question – requirements that need clarification.
A “good”, testable requirement is:
- unambiguous,
- verifiable,
- consistent.
Examples help achieve these.
Three Amigos
Function: "Password reset via email."
Business perspective:
- “The user should receive a reset link valid for 1 hour.”
- “The link should be single-use.”
Developer perspective:
- “The token will be a 64-character random string.”
- “After activation, the token status changes to ‘used’.”
Tester perspective: what kinds of defects may occur?
- Token expiration: “What happens after 61 minutes?”
- Token reuse: “Should a second attempt show an error?”
- Invalid token: “How does the system respond to a manipulated link?”
Testability:
- Event logging (audit)?
- Consistent error messages?
Result:
- Refined requirements
- Clarified business rules
- Newly identified edge cases
- Ensured testability
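The rules agreed in this conversation can be sketched in code. The class and method names below (`ResetToken`, `redeem`) are illustrative assumptions, not any real API; the point is that the tester's questions (expiry after 61 minutes, reuse) map directly onto executable checks.

```python
import secrets
from datetime import datetime, timedelta

# Sketch of the reset-token rules discussed above: 1-hour validity,
# single use, 64-character random token. Names are illustrative only.

TOKEN_TTL = timedelta(hours=1)

class ResetToken:
    def __init__(self, issued_at):
        self.value = secrets.token_hex(32)   # 64 hex characters
        self.issued_at = issued_at
        self.used = False

    def redeem(self, now):
        """Return True if the token is accepted; mark it as used."""
        if self.used:
            return False                     # single-use rule
        if now - self.issued_at > TOKEN_TTL:
            return False                     # expired (e.g. after 61 minutes)
        self.used = True
        return True
```

Each tester question becomes a test case: redeeming at 61 minutes must fail, and a second redemption of the same token must fail even within the hour.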
Acceptance criteria clarification
Function: "The system shall lock the user after three failed login attempts."
Rules:
- If the user enters an incorrect password 3 times, the account is locked.
- The lock lasts 30 minutes.
- During the lock the user cannot attempt login again.
Examples:
- E1: 1 failed attempt → No lock needed.
- E2: 3 consecutive failed attempts → account locked.
- E3: 3 failed attempts from 2 different devices → lock must still occur.
- E4: 4th attempt during lock → system informs the user that the account is locked.
Questions:
- After how long does the counter reset if fewer than 3 attempts occurred?
- Does the lock duration restart if the user requests a new login?
- Is the 30-minute lock configurable?
- Should the IP addresses of failed attempts be logged?
Result:
- The requirement becomes genuinely testable and consistent.
- The examples can later serve as the foundation for BDD scenarios.
- The questions reveal hidden risks.
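The rules and examples above can be exercised with a minimal sketch. The `Account` class and the return values `"ok"`/`"failed"`/`"locked"` are assumptions for illustration; note that the open questions (counter reset, configurability) are deliberately not answered here, mirroring the still-unclarified requirement.

```python
from datetime import datetime, timedelta

# Minimal sketch of the lockout rules: 3 failures -> 30-minute lock.

LOCK_DURATION = timedelta(minutes=30)
MAX_FAILURES = 3

class Account:
    def __init__(self):
        self.failures = 0
        self.locked_until = None

    def is_locked(self, now):
        return self.locked_until is not None and now < self.locked_until

    def login(self, password_ok, now):
        if self.is_locked(now):
            return "locked"                   # E4: attempt during lock
        if password_ok:
            self.failures = 0
            return "ok"
        self.failures += 1
        if self.failures >= MAX_FAILURES:     # E2: third consecutive failure
            self.locked_until = now + LOCK_DURATION
            return "locked"
        return "failed"                       # E1: fewer than 3 failures
```

Examples E1, E2 and E4 translate directly into assertions; E3 (two devices) would additionally require tracking attempts per account rather than per session.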
Specification by Example (Behaviour Driven Development)¶
- BDD helps ensure testability.
- The Given–When–Then format serves as the basis of test cases.
- Scenarios create a “common language” between roles.
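A Given–When–Then scenario for the lockout rule can be sketched as a plain Python test. In real BDD tooling (Cucumber, behave, pytest-bdd) the scenario text would live in a `.feature` file bound to step functions; here the structure is shown with a docstring and comments, and the in-memory logic is illustrative only.

```python
MAX_FAILURES = 3  # assumed threshold from the lockout example

def account_locked_after(wrong_attempts):
    """
    Scenario: account locks after three failed logins
      Given a registered user
      When the user enters a wrong password <wrong_attempts> times
      Then the account is locked only if the limit was reached
    """
    failures = 0
    for _ in range(wrong_attempts):
        failures += 1          # When: each failed attempt is counted
    return failures >= MAX_FAILURES   # Then: locked iff limit reached

# Then-steps as executable checks:
assert account_locked_after(3) is True
assert account_locked_after(2) is False
```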
During collaboration, the tester:
- identifies potential boundary values (Boundary testing),
- highlights possible equivalence classes,
- raises potential risks (Risk-based testing).
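As a sketch of the first two contributions, suppose the conversation surfaces a rule "the password must be 8–64 characters" (the limits are invented for illustration). The tester derives two equivalence classes (valid and invalid lengths) and tests the values just below, on, and just above each boundary.

```python
MIN_LEN, MAX_LEN = 8, 64  # assumed business rule, for illustration only

def password_length_ok(password):
    """Valid-length equivalence class: MIN_LEN..MAX_LEN characters."""
    return MIN_LEN <= len(password) <= MAX_LEN

# Boundary values around each limit:
cases = {7: False, 8: True, 64: True, 65: False}
for length, expected in cases.items():
    assert password_length_ok("x" * length) is expected
```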
Experience-Based Techniques¶
These techniques are useful when
- requirements are incomplete,
- time is limited,
- fast defect discovery is needed,
- uncertainty is high,
- there is no detailed test documentation.
These methods rely on the tester’s expertise and intuition.
Error Guessing¶
Error guessing is a method based on the tester’s professional experience and assumed defect patterns.
Typical defect patterns:
- Handling of null values
- Mismatched formats
- Incorrect boundary handling
- Incorrect state transitions
- Poorly handled exceptions
Error guessing
“What happens if the user’s password is 0 characters long?”
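This guess, together with the defect patterns listed above, can be turned into a quick probe. The `validate_password` function below is a hypothetical target, not a real API; the interesting part is the list of inputs a tester guesses at: null value, empty string, whitespace only, wrong type.

```python
def validate_password(password):
    """Hypothetical validator: accept only a non-empty string password."""
    if not isinstance(password, str):
        return False          # null value / wrong type
    if password.strip() == "":
        return False          # 0-character or whitespace-only password
    return True

# Inputs guessed from typical defect patterns:
for bad in (None, "", "   ", 12345):
    assert validate_password(bad) is False
assert validate_password("correct horse") is True
```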
Exploratory Testing¶
Exploratory testing is:
- simultaneous design, execution, and learning,
- structured but not script-based,
- goal-driven (charter),
- session-based (SBTM).
Key elements:
- Charter: short mission (e.g. “Test the payment flow with extreme data inputs”).
- Observation and adaptation.
- Note-taking (observations, findings).
Charter
“Investigate what input validations the system performs during registration.”
Using Checklists¶

This is one of the most important experience-based methods:
- more structured than exploratory testing,
- fast and time-efficient,
- captures standards or organizational know-how.
Example checklist items:
- Does each mandatory field have validation?
- Are messages clear and consistently displayed?
- Is error logging adequate?
- Can the operation be reset to default state?
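One way to capture such organizational know-how is to keep the checklist as data, so every session records a result per item. The structure below (a dict mapping item to pass/fail/unanswered) is an assumption for illustration; the item texts come from the list above.

```python
# Checklist captured as data so sessions can be tracked uniformly.
CHECKLIST = [
    "Does each mandatory field have validation?",
    "Are messages clear and consistently displayed?",
    "Is error logging adequate?",
    "Can the operation be reset to default state?",
]

def open_session(items):
    """Start a session with every item unanswered (None)."""
    return {item: None for item in items}

def outstanding(session):
    """Items that still have no recorded result."""
    return [item for item, result in session.items() if result is None]

session = open_session(CHECKLIST)
session[CHECKLIST[0]] = True   # record one pass
assert len(outstanding(session)) == 3
```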
Heuristics (Implicit Knowledge Patterns)¶
Heuristics are cognitive patterns derived from experience.
Examples:
- Consistency oracle — looking for consistency
- State-based thinking — defects around state transitions
- Claims testing — does the system actually do what it claims?
- History-based heuristics — assumptions based on past defects
