AI Skill Report Card
Basic Test Execution Skill
Overview
This skill enables an AI to systematically test and validate outputs, processes, or systems, ensuring quality, functionality, and reliability before deployment or finalization.
Methodology
1. Test Planning
- Define clear test objectives and success criteria
- Identify what needs to be tested (functionality, performance, edge cases)
- Determine test scope and boundaries
- Create test scenarios covering normal, boundary, and error conditions
2. Test Design
- Develop specific test cases with expected outcomes
- Create input variations (valid, invalid, edge cases)
- Design verification steps
- Prepare test data and environment setup
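The input-variation idea above can be sketched as a small table of designed test cases. This is a minimal Python sketch; the function under test, `parse_age`, is a hypothetical example used only for illustration:

```python
# Designed test cases: each entry pairs an input variation
# (valid, boundary, invalid) with its expected outcome.
# `parse_age` is a hypothetical function used only for illustration.

def parse_age(raw):
    """Parse a string into an age, rejecting non-numeric or out-of-range input."""
    if not raw.strip().lstrip("-").isdigit():
        raise ValueError("not a number")
    value = int(raw)
    if value < 0 or value > 150:
        raise ValueError("out of range")
    return value

# Test cases: (test_id, input, expected result or expected exception type)
TEST_CASES = [
    ("valid-typical", "42",  42),          # normal condition
    ("valid-lower",   "0",   0),           # lower boundary
    ("valid-upper",   "150", 150),         # upper boundary
    ("invalid-range", "151", ValueError),  # just past the boundary
    ("invalid-type",  "abc", ValueError),  # non-numeric input
]
```

Listing the expected exception type alongside valid expectations keeps error-condition cases in the same table as happy-path cases, which makes coverage gaps easy to spot.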
3. Test Execution
- Run tests systematically in logical order
- Document actual results vs expected results
- Note any anomalies or unexpected behaviors
- Capture evidence (logs, screenshots, outputs)
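The execution step can be sketched as a small runner that records actual vs. expected results and logs one evidence line per case. The function under test (`add`) and the case format are illustrative assumptions:

```python
# Minimal test-runner sketch: execute each case, compare actual vs
# expected, and keep a result record per case as evidence.
# `add` and the case tuple format are illustrative assumptions.

def add(a, b):
    return a + b

def run_tests(cases):
    """cases: list of (test_id, args, expected). Returns a list of result records."""
    results = []
    for test_id, args, expected in cases:
        try:
            actual = add(*args)
            status = "Pass" if actual == expected else "Fail"
        except Exception as exc:  # unexpected behavior: record it, keep going
            actual, status = repr(exc), "Fail"
        results.append({"id": test_id, "expected": expected,
                        "actual": actual, "status": status})
    return results

log = run_tests([
    ("t1", (2, 2), 4),
    ("t2", (-1, 1), 0),
    ("t3", (2, 2), 5),  # deliberately wrong expectation, to show a Fail record
])
for rec in log:
    print(f"{rec['id']}: {rec['status']} (expected {rec['expected']}, got {rec['actual']})")
```

Catching exceptions inside the loop matters: one crashing case should produce a Fail record with the error as evidence, not abort the whole run.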
4. Result Analysis
- Compare outcomes against success criteria
- Identify patterns in failures or issues
- Categorize problems by severity and impact
- Determine root causes when possible
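Grouping failures by severity, as described above, can be sketched with a counter over the result records. The record format and severity labels here are assumptions for illustration:

```python
# Result-analysis sketch: filter failed results, group them by a
# severity label, and surface the most frequent category first.
# The record format and severity labels are illustrative assumptions.
from collections import Counter

def analyze(results):
    failures = [r for r in results if r["status"] == "Fail"]
    by_severity = Counter(r["severity"] for r in failures)
    return {
        "failure_count": len(failures),
        "by_severity": dict(by_severity),
        "most_common_first": [sev for sev, _ in by_severity.most_common()],
    }

summary = analyze([
    {"id": "t1", "status": "Pass", "severity": None},
    {"id": "t2", "status": "Fail", "severity": "critical"},
    {"id": "t3", "status": "Fail", "severity": "minor"},
    {"id": "t4", "status": "Fail", "severity": "minor"},
])
```

Frequency ordering helps spot patterns (several minor failures sharing one root cause), while the severity counts preserve the impact view.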
5. Reporting
- Summarize findings clearly and concisely
- Highlight critical issues requiring immediate attention
- Provide recommendations for improvements
- Document lessons learned for future testing
Best Practices
- Start Simple: Begin with basic happy path scenarios before complex edge cases
- Be Systematic: Follow a consistent testing approach across all scenarios
- Document Everything: Record both successes and failures with details
- Test Early and Often: Don't wait until the end to validate functionality
- Think Like an End User: Consider real-world usage patterns
- Isolate Variables: Test one thing at a time when possible
Templates
Basic Test Case Template
Test ID: [Unique identifier]
Objective: [What is being tested]
Prerequisites: [Required setup/conditions]
Steps: [Numbered execution steps]
Expected Result: [What should happen]
Actual Result: [What actually happened]
Status: [Pass/Fail/Blocked]
Notes: [Additional observations]
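The template above maps naturally onto a data structure so cases can be stored, filtered, and reported programmatically. This Python dataclass is an illustrative sketch; the field names mirror the template:

```python
# The test-case template as a data structure. Field names mirror the
# template above; the structure itself is an illustrative sketch.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str               # Test ID: unique identifier
    objective: str             # Objective: what is being tested
    prerequisites: str = ""    # Prerequisites: required setup/conditions
    steps: list = field(default_factory=list)  # numbered execution steps
    expected_result: str = ""  # what should happen
    actual_result: str = ""    # what actually happened
    status: str = "Blocked"    # Pass / Fail / Blocked
    notes: str = ""            # additional observations

case = TestCase(
    test_id="CALC-001",
    objective="Addition of two positive integers",
    steps=["Call add(2, 2)", "Compare result to 4"],
    expected_result="4",
)
```

Defaulting `status` to "Blocked" means a case that was never executed is never silently counted as a pass.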
Test Summary Template
Test Summary
- Total Tests: [Number]
- Passed: [Number] ([Percentage])
- Failed: [Number] ([Percentage])
- Blocked: [Number] ([Percentage])
Critical Issues
[List of high-priority problems]
Recommendations
[Suggested next steps]
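The counts and percentages in the summary template can be filled in mechanically from a list of case statuses. A minimal sketch, assuming the Pass/Fail/Blocked status values from the template above:

```python
# Fill the summary-template fields from a list of case statuses.
# Status values (Pass/Fail/Blocked) come from the template above.

def summarize(statuses):
    total = len(statuses)
    counts = {s: statuses.count(s) for s in ("Pass", "Fail", "Blocked")}

    def pct(n):
        return f"{100 * n / total:.0f}%" if total else "0%"

    return {
        "total": total,
        **{s.lower(): (n, pct(n)) for s, n in counts.items()},
    }

report = summarize(["Pass", "Pass", "Fail", "Blocked"])
# report["pass"] -> (2, "50%")
```

Guarding the percentage against an empty test list avoids a division-by-zero crash in the report itself, an example of the very edge cases the skill calls for.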
Examples
Good Test Execution
Input: Testing a calculator function with multiple scenarios
- Tests basic operations (2+2=4)
- Tests edge cases (division by zero)
- Tests input validation (non-numeric inputs)
- Documents clear pass/fail criteria
- Provides specific error messages for failures
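The good-execution scenarios above can be written as concrete assertions. The calculator functions here (`add`, `divide`) are illustrative assumptions, not part of the original:

```python
# Concrete sketch of the 'good' calculator tests: a basic operation,
# a division-by-zero edge case, and non-numeric input validation.
# The calculator functions themselves are illustrative assumptions.

def add(a, b):
    if not all(isinstance(x, (int, float)) for x in (a, b)):
        raise TypeError("inputs must be numeric")
    return a + b

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("division by zero is undefined")
    return a / b

# Basic operation: 2 + 2 = 4
assert add(2, 2) == 4

# Edge case: division by zero must fail loudly, not return a value
try:
    divide(1, 0)
    raise AssertionError("expected ZeroDivisionError")
except ZeroDivisionError:
    pass

# Input validation: non-numeric input is rejected with a specific message
try:
    add("2", 2)
    raise AssertionError("expected TypeError")
except TypeError:
    pass
```

Note that each failure path raises a named exception with a specific message, which is exactly the "specific error messages for failures" criterion listed above.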
Poor Test Execution
Input: Testing the same calculator function
- Only tests one happy path scenario
- Doesn't consider edge cases or error conditions
- Vague pass/fail criteria ("it works")
- No documentation of test steps or rationale
Recommendation
Add specific examples of test data sets and edge cases for different domains (e.g., web applications, data processing, APIs) to make the skill more immediately applicable.
What NOT to Do
- Don't Skip Edge Cases: Ignoring boundary conditions often reveals critical flaws
- Don't Test Everything at Once: Avoid combining multiple variables that make diagnosis difficult
- Don't Ignore "Small" Failures: Minor issues can compound into major problems
- Don't Test Without Clear Criteria: Subjective "it looks good" assessments aren't actionable
- Don't Forget to Document: Undocumented tests provide no learning value for future iterations
- Don't Test in Production: Always use safe environments for testing when possible
- Don't Assume Previous Tests Still Pass: Re-validate when making changes to related components