AI Skill Report Card
Testing and Validation Skill
Overview
This skill enables AI to systematically test, validate, and verify functionality, processes, or outputs to ensure quality, reliability, and correctness. It encompasses creating test cases, executing validation procedures, and identifying potential issues before deployment or implementation.
Methodology
Step 1: Define Testing Scope
- Identify what needs to be tested (functionality, performance, edge cases)
- Establish success criteria and acceptance thresholds
- Document assumptions and constraints
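Step 1 can be captured as a small data structure so scope, criteria, and assumptions are explicit and machine-checkable. This is a minimal sketch; `TestScope` and its field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestScope:
    """Records what will be tested and how success is judged.

    All names here are hypothetical examples, not a required format.
    """
    targets: list            # features or components under test
    success_criteria: dict   # metric name -> acceptance threshold
    assumptions: list = field(default_factory=list)

scope = TestScope(
    targets=["login", "checkout"],
    success_criteria={"pass_rate": 0.95, "p95_latency_ms": 300},
    assumptions=["test data mirrors production volumes"],
)
```

Writing the thresholds down up front (here, a 95% pass rate and a 300 ms latency ceiling) prevents pass/fail decisions from becoming subjective later.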
Step 2: Design Test Strategy
- Create a comprehensive test plan covering all scenarios
- Prioritize test cases by risk and impact
- Select appropriate testing methods (unit, integration, system, user acceptance)
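Prioritizing by risk and impact can be as simple as ordering cases by a combined score. A minimal sketch, assuming hypothetical 1-5 risk and impact ratings per test case:

```python
# Hypothetical test cases; "risk" and "impact" are assumed 1-5 ratings.
test_cases = [
    {"id": "TC-001", "risk": 5, "impact": 5},
    {"id": "TC-002", "risk": 2, "impact": 2},
    {"id": "TC-003", "risk": 4, "impact": 5},
]

def prioritize(cases):
    """Order test cases so the highest risk x impact score runs first."""
    return sorted(cases, key=lambda c: c["risk"] * c["impact"], reverse=True)

ordered = prioritize(test_cases)
```

With limited time, this ensures the most consequential scenarios are exercised before low-stakes ones.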
Step 3: Prepare Test Environment
- Set up controlled testing conditions
- Gather necessary test data and resources
- Establish baseline measurements
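One way to get a controlled environment is an isolated working directory with the baseline measurements written alongside the test data. A sketch using only the standard library; the directory layout and metric names are assumptions:

```python
import json
import pathlib
import tempfile

def prepare_environment(baseline_metrics):
    """Create an isolated working directory and record baseline measurements.

    baseline_metrics: dict of metric name -> measured value (illustrative).
    """
    workdir = pathlib.Path(tempfile.mkdtemp(prefix="testenv_"))
    (workdir / "baseline.json").write_text(json.dumps(baseline_metrics))
    return workdir

env = prepare_environment({"mean_response_ms": 120})
baseline = json.loads((env / "baseline.json").read_text())
```

Storing the baseline next to the test artifacts means later runs can be compared against it without relying on memory or scattered notes.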
Step 4: Execute Tests
- Run tests systematically following documented procedures
- Record results, observations, and anomalies
- Maintain detailed logs of all test activities
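A systematic runner records a status per test and logs every outcome, distinguishing real failures from environment problems (the Pass/Fail/Blocked statuses used in the template below). A minimal sketch with hypothetical test names:

```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("test-runner")

def run_tests(tests):
    """Run each named test callable, logging and recording its outcome."""
    results = {}
    for name, test in tests.items():
        try:
            test()
            results[name] = "Pass"
            log.info("%s: Pass", name)
        except AssertionError:
            results[name] = "Fail"
            log.warning("%s: Fail\n%s", name, traceback.format_exc())
        except Exception:
            results[name] = "Blocked"  # setup/environment problem, not a product failure
            log.error("%s: Blocked\n%s", name, traceback.format_exc())
    return results

def check_addition():
    assert 1 + 1 == 2

def check_rounding():
    assert round(2.5) == 3  # fails: Python rounds half to even, giving 2

def check_missing_fixture():
    open("/no/such/test_fixture.txt")  # raises, so the test is Blocked

results = run_tests({
    "addition": check_addition,
    "rounding": check_rounding,
    "fixture": check_missing_fixture,
})
```

Keeping the full traceback in the log is what makes an anomaly reproducible later instead of a vague "it failed once".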
Step 5: Analyze Results
- Compare actual outcomes against expected results
- Identify patterns in failures or successes
- Assess severity and priority of any issues found
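The comparison of actual versus expected outcomes can be expressed directly. A sketch over hypothetical HTTP-style outcomes; the record shape is an assumption:

```python
def analyze(outcomes):
    """Separate mismatches from matches and compute the overall pass rate."""
    failures = [o for o in outcomes if o["actual"] != o["expected"]]
    pass_rate = 1 - len(failures) / len(outcomes)
    return failures, pass_rate

outcomes = [
    {"id": "TC-001", "expected": 200, "actual": 200},
    {"id": "TC-002", "expected": 200, "actual": 500},
    {"id": "TC-003", "expected": 404, "actual": 404},  # expecting an error IS a valid expectation
    {"id": "TC-004", "expected": 200, "actual": 200},
]
failures, pass_rate = analyze(outcomes)
```

The failure list, not the pass rate alone, is the input to severity assessment: one failing checkout test can outweigh many passing cosmetic ones.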
Step 6: Report and Recommend
- Document findings clearly and comprehensively
- Provide actionable recommendations for improvements
- Suggest retesting procedures for fixes
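Findings and retest recommendations can be derived mechanically from the recorded statuses. A minimal sketch; the status strings and recommendation wording are illustrative:

```python
def summarize_findings(results):
    """Turn raw statuses into a findings dict with a retest recommendation."""
    failed = [tid for tid, status in results.items() if status == "Fail"]
    return {
        "total": len(results),
        "failed": failed,
        "recommendation": "retest after fix" if failed else "ready to release",
    }

report = summarize_findings({"TC-001": "Pass", "TC-002": "Fail", "TC-003": "Pass"})
```

Listing the exact failed IDs gives the fix-and-retest loop a concrete scope instead of "rerun everything and hope".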
Best Practices
- Start with simple, basic tests before complex scenarios
- Test both positive and negative cases
- Use realistic data that mirrors production conditions
- Automate repetitive tests when possible
- Maintain traceability between requirements and test cases
- Test early and often throughout development cycles
- Document everything for reproducibility
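The "automate repetitive tests" practice often starts as a table-driven check: one loop over many input/expected pairs instead of copy-pasted manual runs. A sketch with a stand-in function (`slugify` here is a hypothetical unit under test):

```python
def slugify(title):
    """Stand-in function under test: normalize a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

# Each pair is one repetitive test case; adding coverage means adding a row.
cases = [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
    ("already-slug", "already-slug"),
]

failures = [(given, slugify(given), want)
            for given, want in cases
            if slugify(given) != want]
```

In practice a framework like pytest's parametrized tests plays this role, but the table-of-cases idea is the same.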
Templates
Test Case Template
Test Case ID: TC-XXX
Title: [Brief description]
Preconditions: [Setup requirements]
Test Steps:
1. [Action]
2. [Action]
Expected Result: [What should happen]
Actual Result: [What actually happened]
Status: [Pass/Fail/Blocked]
Notes: [Additional observations]
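The template above maps naturally onto a record type, which makes test cases filterable and reportable by tooling. A sketch; field names mirror the template but are not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Machine-readable form of the test case template above."""
    case_id: str
    title: str
    preconditions: str
    steps: list
    expected: str
    actual: str = ""
    status: str = "Blocked"  # Pass / Fail / Blocked; Blocked until executed
    notes: str = ""

tc = TestCase(
    case_id="TC-001",
    title="Login with valid credentials",
    preconditions="User account exists",
    steps=["Open login page", "Submit valid credentials"],
    expected="Dashboard is shown",
)
```

Defaulting `status` to "Blocked" makes unexecuted cases visible instead of silently counting as passed.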
Test Report Template
Test Summary Report
- Total Tests: X
- Passed: X (X%)
- Failed: X (X%)
- Blocked: X (X%)
Critical Issues: [High priority failures]
Recommendations: [Next steps]
Risk Assessment: [Impact analysis]
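The counts and percentages in the summary template can be computed rather than tallied by hand. A minimal sketch over a hypothetical list of per-test statuses:

```python
def summary_report(statuses):
    """Fill in the summary template's counts and percentages from raw statuses."""
    total = len(statuses)

    def line(label):
        n = sum(1 for s in statuses if s == label)
        return f"{label}: {n} ({100 * n / total:.0f}%)"

    return [f"Total Tests: {total}", line("Passed"), line("Failed"), line("Blocked")]

report = summary_report(["Passed", "Passed", "Failed", "Blocked"])
```

Generating the report from the same data the runner recorded removes a common source of transcription errors.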
Examples
Good Testing Approach
- Comprehensive: Tests cover normal use, edge cases, and error conditions
- Systematic: Follows documented procedures with clear steps
- Objective: Uses measurable criteria for pass/fail decisions
- Documented: Maintains detailed records of all activities and results
Poor Testing Approach
- Ad-hoc: Random testing without a systematic approach
- Incomplete: Missing critical scenarios or edge cases
- Subjective: Relies on opinions rather than measurable criteria
- Undocumented: No record of what was tested or of the results
Recommendation
Add specific examples of testing tools, frameworks, or metrics (e.g., code coverage percentages, response time thresholds, specific automation tools like Selenium or Jest)
What NOT to Do
- Don't test only the "happy path"; include error conditions
- Don't assume previous tests are still valid after changes
- Don't skip documentation because "it's just a quick test"
- Don't test in production environments without proper safeguards
- Don't ignore seemingly minor issues without proper assessment
- Don't proceed without clear pass/fail criteria
- Don't test with insufficient or unrealistic data
- Don't rely solely on manual testing for repetitive tasks
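The first rule above, testing beyond the happy path, means deliberately exercising the error branch. A sketch with a hypothetical `safe_divide` function:

```python
def safe_divide(a, b):
    """Division that rejects invalid input up front instead of failing later."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Positive case: the happy path.
happy = safe_divide(10, 2)

# Negative case: verify the error path actually triggers.
try:
    safe_divide(1, 0)
    error_raised = False
except ValueError:
    error_raised = True
```

A suite that only checks `happy` would pass even if the zero-divisor guard were deleted; the negative case is what protects the error-handling contract.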