Quality Control Training Manual

Prepared by: ITA

What is Manual Testing?

Manual testing is a process where testers manually execute test cases without automation tools to verify software functionality. It ensures the application meets business and user requirements. Testers check usability, performance, security, and overall system behavior. Common types include integration, relational, feature, and system testing. It helps detect UI and usability issues. Manual testing is useful for exploratory, ad-hoc, and early-stage testing. It ensures data integrity, correct system interactions, and smooth user experiences. Testers follow predefined test cases and document defects. It requires analytical skills, attention to detail, and knowledge of databases and APIs. Manual testing is crucial for delivering a stable and high-quality application.

This document provides a structured manual testing guideline focusing on:

  1. Integration Testing
  2. Relational Testing
  3. Feature Testing
  4. System Testing

It ensures that new testers can efficiently understand, execute, and validate test cases to guarantee the stability and reliability of the application.

Integration Testing

Definition

Integration testing verifies the interaction between different modules of the application to ensure they work together as expected. This testing phase detects defects in data flow, API calls, and communication between components.

Purpose

  • To ensure seamless communication between integrated modules
  • To detect interface mismatches between different components
  • To verify data flow between modules
  • To validate API calls

Pre-condition

  • Unit testing completed for each module
  • Database configured
  • API endpoints configured and documented
  • Stable test data that reflects the production environment

Execution steps

  • Identify the integrated modules in the system
  • Define the inputs and expected outputs for the integrated modules
  • Prepare the possible test cases and execute them accordingly
  • Validate the outputs and record any defects
  • Retest after the defects are resolved
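The steps above can be sketched in code. The example below is a minimal integration-test sketch using Python's `unittest`; `SiteRegistry` and `LicenseSource` are hypothetical stand-ins for two integrated modules (they are not part of the application under test) and exist only to show how the interaction between modules is exercised.

```python
# Hypothetical modules used to illustrate integration testing:
# SiteRegistry depends on LicenseSource, and the tests exercise
# the interaction between the two rather than either one alone.
import unittest

class LicenseSource:
    """Hypothetical module that supplies the available license options."""
    def available_licenses(self):
        return ["standard", "enterprise"]

class SiteRegistry:
    """Hypothetical module that registers site domains."""
    def __init__(self, license_source):
        self.license_source = license_source
        self.sites = {}

    def register(self, domain, license_name):
        # Data flows from SiteRegistry into LicenseSource and back:
        # this boundary is what integration testing targets.
        if domain in self.sites:
            raise ValueError("domain already registered")
        if license_name not in self.license_source.available_licenses():
            raise ValueError("unknown license")
        self.sites[domain] = license_name
        return True

class SiteRegistrationIntegrationTest(unittest.TestCase):
    def test_register_with_valid_license(self):
        registry = SiteRegistry(LicenseSource())
        self.assertTrue(registry.register("example.com", "standard"))

    def test_duplicate_domain_rejected(self):
        registry = SiteRegistry(LicenseSource())
        registry.register("example.com", "standard")
        with self.assertRaises(ValueError):
            registry.register("example.com", "enterprise")
```

Run with `python -m unittest` to execute both cases; a failing interaction (for example, a license name the source no longer offers) surfaces as a recorded defect to retest after the fix.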

Test case example

| # | Feature | Related features | Pre-conditions | Test steps | Expected results | Pass / Fail |
|---|---------|------------------|----------------|------------|------------------|-------------|
| 1 | Site Registration | Planner Login | N/A | Enter valid credentials | Successful login; site registration page is displayed | Pass |
| 2 | Site Registration | N/A | Successfully logged in | Enter a unique site domain | Unique site domain is registered | Pass |
| 3 | Site Registration | N/A | Successfully logged in | Select a license source from the pulldown option | License source is selected | Pass |

Relational Testing

Definition

Relational testing ensures that data integrity and relationships between various database entities are maintained correctly. It is crucial for verifying foreign key constraints, cascading updates, and data consistency.

Purpose

  • To ensure relationships between database tables are maintained
  • To verify foreign key constraints
  • To check cascading delete and update operations
  • To prevent data integrity issues

Pre-condition

  • Database properly designed
  • Test data inserted before execution
  • Relationships between data tables defined

Execution steps

  • Identify key relationships between database tables
  • Design test cases to validate relational constraints
  • Execute CRUD (Create, Read, Update, Delete) operations and verify integrity
  • Record defects if data inconsistencies are found
  • Re-execute tests after issue resolution
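These steps can be exercised directly against a database. The sketch below uses Python's built-in `sqlite3` with a hypothetical `site` / `site_page` schema (the real application's tables will differ) to check a foreign key constraint and a cascading delete, mirroring the test scenarios in the table that follows.

```python
# Relational-testing sketch: verify a foreign key constraint and a
# cascading delete against an in-memory SQLite database.
# The site/site_page schema here is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default

conn.execute("CREATE TABLE site (id INTEGER PRIMARY KEY, domain TEXT UNIQUE)")
conn.execute("""CREATE TABLE site_page (
    id INTEGER PRIMARY KEY,
    site_id INTEGER NOT NULL REFERENCES site(id) ON DELETE CASCADE,
    title TEXT)""")

conn.execute("INSERT INTO site (id, domain) VALUES (1, 'example.com')")
conn.execute("INSERT INTO site_page (site_id, title) VALUES (1, 'Home')")

# Foreign key constraint: a page pointing at a missing site must be rejected.
try:
    conn.execute("INSERT INTO site_page (site_id, title) VALUES (99, 'Orphan')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True

# Cascading delete: removing the site should remove its dependent pages.
conn.execute("DELETE FROM site WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM site_page").fetchone()[0]

print(fk_enforced, remaining)  # expect: True 0
```

Note the `PRAGMA foreign_keys = ON` line: a relational test that forgets it would pass trivially because SQLite would never enforce the constraint being tested.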

Test case example

| # | Test scenario | Test steps | Expected results | Pass / Fail |
|---|---------------|------------|------------------|-------------|
| 1 | Verify foreign key constraints in the site table | Try to register a site with an invalid field | System prevents the insertion | Pass |
| 2 | Verify cascading delete in the site table | Delete a site record | Related site records are deleted | Pass |

Feature Testing

Definition

Feature testing is a software testing process that ensures specific features of an application function as intended. It validates that the feature meets both functional and business requirements.

Purpose

  • To verify that the implemented feature meets business and user requirements
  • To ensure that the feature functions correctly under different scenarios
  • To detect defects related to usability, functionality, and performance
  • To validate feature interactions with other modules

Pre-condition

  • The feature should be fully developed and ready for testing
  • All dependencies, such as databases and APIs, should be available and functional
  • The test environment should closely resemble the production setup
  • Test cases should be well-defined with expected outcomes

Execution steps

  • Identify the feature to be tested
  • Define test cases based on functional and business requirements
  • Execute the test cases and analyze the results
  • Record and report defects if any discrepancies are found
  • Retest after defect resolution to ensure expected functionality
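A feature test can often be reduced to checking the feature's rules in isolation. The sketch below encodes the Planner Login validation rules as a hypothetical `validate_login` helper (a stand-in for the real form handler, which is not shown in this document) and asserts the outcomes listed in the test case table that follows.

```python
# Feature-test sketch for the Planner Login validation rules.
# validate_login is a hypothetical stand-in for the real form handler.
def validate_login(planner_id: str, password: str) -> list[str]:
    """Return the list of validation error messages (empty list = valid)."""
    errors = []
    if not planner_id.strip():
        errors.append("Planner ID is required")
    if not password.strip():
        errors.append("Password is required")
    return errors

# Case: both fields empty -> both error messages are displayed.
assert validate_login("", "") == ["Planner ID is required",
                                  "Password is required"]
# Case: both fields filled -> no validation errors.
assert validate_login("P-1001", "secret") == []
```

Checking the rules this way complements, but does not replace, executing the documented steps in the browser, where the UI behavior (red borders, button states) must still be verified manually.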

Test case example

| # | Step details | Expected results | Actual results | Pass / Fail |
|---|--------------|------------------|----------------|-------------|
| 1 | Navigate to the Planner Login page | Planner Login page is displayed with input fields for 'Planner ID' and 'Password', along with Clear and Login buttons | As expected | Pass |
| 2 | Leave both input fields empty and click the Login button | Error messages 'Planner ID is required' and 'Password is required' are displayed, and input fields have red borders | As expected | Pass |

System Testing

Definition

System testing is a high-level testing phase that validates the complete and fully integrated application to ensure it meets the specified requirements. This testing phase checks the system’s overall behavior, functionality, security, performance, and compliance.

Purpose

  • To verify that the entire system functions as expected
  • To ensure compliance with business and functional requirements
  • To validate system interactions with external components
  • To test end-to-end business flows
  • To identify performance, security, and usability issues

Pre-condition

  • All modules should be successfully integrated
  • Integration testing should be completed without major defects
  • The test environment should mirror the production environment
  • Test data should be prepared for various test scenarios
  • Functional, performance, and security requirements should be well-defined

Execution steps

  • Requirement Analysis – Understand the functional and non-functional requirements of the system
  • Test Planning – Define test objectives, scope, resources, and schedule
  • Test Case Development – Prepare detailed test cases covering all system aspects
  • Test Environment Setup – Ensure the testing environment is configured properly
  • Test Execution – Execute test cases and document the results
  • Defect Logging & Reporting – Identify, log, and track defects until resolution
  • Regression Testing – Retest the system after defect fixes to ensure stability
  • Final Validation & Sign-Off – Validate test results and prepare a final report

Test case example

| # | Feature | Related features | Pre-conditions | Test steps | Expected results | Pass / Fail |
|---|---------|------------------|----------------|------------|------------------|-------------|
| 1 | Planner Login | N/A | Valid URL is available | Browse with the proper URL in the browser | Redirected to the Planner Login page | Pass |
| 2 | Planner Login | N/A | Valid URL is available | Browse with an invalid URL in the browser | Not redirected to the target destination | Pass |

Risks & Challenges In Testing

  • Complex integration scenarios may require extensive debugging
  • Database inconsistencies could lead to application failures
  • Dynamic data dependencies make it challenging to define static test cases
  • High maintenance effort for constantly evolving software

Tester's Expertise & Efficiency

  • Technical Knowledge: Understanding of SQL, APIs, and debugging tools
  • Analytical Skills: Ability to analyze and break down complex scenarios
  • Attention to Detail: Identifying hidden defects
  • Automation Awareness: Understanding when and how to implement automated tests
  • Efficiency in Execution: Optimizing test case execution for time efficiency