AI Skill Report Card
Test Creation and Execution Skill
Overview
This skill enables AI to design, create, and execute comprehensive tests across various domains including software, processes, knowledge validation, and system verification. It focuses on systematic evaluation methods that ensure quality, reliability, and performance standards are met.
Methodology
1. Test Planning
- Define clear test objectives and success criteria
- Identify scope and boundaries of what will be tested
- Determine test environment and resource requirements
- Establish timeline and deliverables
2. Test Design
- Create test cases that cover normal, edge, and error conditions
- Design test data sets (valid, invalid, boundary values)
- Develop test scenarios that simulate real-world usage
- Plan for both positive and negative test cases
3. Test Implementation
- Set up test environment and prerequisites
- Execute tests in logical sequence
- Document results with precise observations
- Capture evidence (screenshots, logs, measurements)
4. Result Analysis
- Compare actual vs expected outcomes
- Identify patterns in failures or successes
- Classify issues by severity and impact
- Generate actionable recommendations
5. Reporting and Follow-up
- Create clear, structured test reports
- Communicate findings to stakeholders
- Track issue resolution
- Validate fixes through retesting
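The execution-and-analysis steps above can be sketched in a few lines of code: run each case, compare actual vs. expected outcomes (treating raised errors as results too), and record a status per case. This is a minimal illustration, not a real test framework; the `run_tests` helper and the case dictionary shape are assumed names for this sketch.

```python
# Minimal sketch of the methodology: execute cases, compare actual vs.
# expected, and classify each result. Illustrative only, not a framework.

def run_tests(cases):
    """Execute each case, record actual vs. expected, and classify it."""
    results = []
    for case in cases:
        func, args, expected = case["func"], case["args"], case["expected"]
        try:
            actual = func(*args)
            status = "Pass" if actual == expected else "Fail"
        except Exception as exc:          # error conditions are results too
            actual, status = repr(exc), "Fail"
        results.append({"name": case["name"], "actual": actual,
                        "expected": expected, "status": status})
    return results

# Example: a trivial function under test, with expected outcomes declared
# up front so analysis is a simple comparison.
cases = [
    {"name": "adds positives", "func": lambda a, b: a + b,
     "args": (2, 3), "expected": 5},
    {"name": "adds negatives", "func": lambda a, b: a + b,
     "args": (-1, -1), "expected": -2},
]
print([r["status"] for r in run_tests(cases)])
```

Because every result record keeps the actual and expected values, the follow-up steps (classification, reporting, retesting) can work from the same data without re-running anything.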
Best Practices
- Start Simple: Begin with basic functionality before complex scenarios
- Be Systematic: Follow consistent testing procedures
- Document Everything: Record steps, inputs, outputs, and observations
- Think Like an End User: Test from the user's perspective
- Test Early and Often: Don't wait until the end to start testing
- Use Representative Data: Test with realistic, varied data sets
- Maintain Independence: Keep testing objective and unbiased
- Plan for Automation: Design tests that can be repeated efficiently
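Several of these practices, particularly "Use Representative Data" and "Plan for Automation", come down to expressing checks as a data table that can be re-run identically on every change. A hedged sketch, using a toy username validator and an illustrative `check_all` helper (the 3-to-20-character rule is an assumption for the example):

```python
# Sketch of automation-friendly testing: one table row per case, covering
# normal, boundary, and invalid values, rerunnable on every change.

def is_valid_username(name):
    """Toy function under test: 3-20 alphanumeric characters."""
    return name.isalnum() and 3 <= len(name) <= 20

CASES = [
    ("alice",  True),   # normal value
    ("ab",     False),  # below minimum length
    ("abc",    True),   # lower boundary
    ("a" * 20, True),   # upper boundary
    ("a" * 21, False),  # above maximum length
    ("bob!",   False),  # special character rejected
    ("",       False),  # empty input
]

def check_all(cases):
    """Return the cases whose actual outcome differs from the expected one."""
    return [(inp, exp) for inp, exp in cases
            if is_valid_username(inp) != exp]

print(check_all(CASES))  # → []  (an empty list means every case passed)
```

Adding a new scenario is then a one-line table edit rather than new procedural code, which keeps the suite consistent and repeatable.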
Templates
Basic Test Case Template
Test ID: [Unique identifier]
Test Name: [Descriptive name]
Objective: [What this test validates]
Prerequisites: [Setup requirements]
Test Steps:
1. [Action]
2. [Action]
3. [Action]
Expected Result: [What should happen]
Actual Result: [What actually happened]
Status: [Pass/Fail/Blocked]
Notes: [Additional observations]
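The fields of the template above map naturally onto a small record type, which is useful when test cases are managed programmatically. A minimal sketch; the `TestCase` class and its defaults are illustrative, not a standard API:

```python
# The test case template expressed as a record type. Fields mirror the
# template: ID, name, objective, prerequisites, steps, expected/actual
# result, status, and notes.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str                  # Test ID: unique identifier
    name: str                     # Test Name: descriptive name
    objective: str                # what this test validates
    prerequisites: str            # setup requirements
    steps: list                   # ordered actions
    expected_result: str          # what should happen
    actual_result: str = ""       # filled in during execution
    status: str = "Blocked"       # Pass / Fail / Blocked
    notes: str = ""               # additional observations

tc = TestCase("TC-001", "Login with valid credentials",
              "Verify a registered user can sign in",
              "User account exists",
              ["Open login page", "Enter credentials", "Submit"],
              "User lands on dashboard")
print(tc.status)  # → Blocked  (not yet executed)
```

Defaulting `status` to "Blocked" until the case is actually run keeps unexecuted cases from being mistaken for passes.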
Test Report Template
Test Report: [Subject]
Summary
- Total Tests: [X]
- Passed: [X]
- Failed: [X]
- Blocked: [X]
Key Findings
- [Critical issues found]
- [Performance observations]
- [Usability concerns]
Recommendations
- [Priority action items]
- [Suggested improvements]
- [Next steps]
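The Summary counts in the report template can be computed from result records rather than tallied by hand, which removes one source of reporting error. A minimal sketch; `summarize` is an illustrative helper name:

```python
# Compute the report's Summary section (total / passed / failed / blocked)
# from a list of result records, each carrying a 'status' field.
from collections import Counter

def summarize(results):
    """Tally statuses from a list of {'status': ...} result records."""
    counts = Counter(r["status"] for r in results)
    return {
        "total": len(results),
        "passed": counts.get("Pass", 0),
        "failed": counts.get("Fail", 0),
        "blocked": counts.get("Blocked", 0),
    }

results = [{"status": "Pass"}, {"status": "Pass"},
           {"status": "Fail"}, {"status": "Blocked"}]
print(summarize(results))
```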
Examples
Good Test Example
Testing a login function:
- Tests valid credentials (should succeed)
- Tests invalid password (should fail with specific error)
- Tests empty fields (should prompt for input)
- Tests special characters in username
- Tests maximum password length
- Documents exact error messages received
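The good example above can be made concrete as runnable checks against a toy login function. The `login()` implementation, its error messages, and the `MAX_PASSWORD` limit are assumptions for illustration, not a real authentication API; the point is that each check asserts a specific, documented outcome rather than a bare pass/fail:

```python
# Toy login function plus the checks from the "good test" list: valid
# credentials, wrong password, empty fields, special characters, and the
# maximum-length boundary, each verifying the exact message returned.

USERS = {"alice": "S3cure!pass"}   # illustrative fixture data
MAX_PASSWORD = 64                  # assumed limit for this sketch

def login(username, password):
    """Return (ok, message) for a login attempt."""
    if not username or not password:
        return False, "Username and password are required"
    if len(password) > MAX_PASSWORD:
        return False, "Password exceeds maximum length"
    if USERS.get(username) != password:
        return False, "Invalid username or password"
    return True, "Welcome"

# Valid credentials should succeed
assert login("alice", "S3cure!pass") == (True, "Welcome")
# Invalid password should fail with a specific, documented error
assert login("alice", "wrong") == (False, "Invalid username or password")
# Empty fields should prompt for input
assert login("", "") == (False, "Username and password are required")
# Special characters in the username must not crash or bypass checks
assert login("al;ce--", "x")[0] is False
# Over-length password is rejected at the boundary
assert login("alice", "a" * (MAX_PASSWORD + 1))[1] == "Password exceeds maximum length"
print("all login checks passed")
```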
Bad Test Example
Testing a login function:
- Only tests one valid login
- Doesn't verify error messages
- Uses only simple passwords
- Doesn't test edge cases
- Records only "pass/fail" without details
Good Test Data
- Covers boundary values (min/max lengths)
- Includes special characters and unicode
- Tests null/empty values
- Uses realistic production-like data
- Includes both valid and invalid inputs
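A "good" data set of this kind can be sketched as a table pairing each input with its expected validity: boundary lengths, unicode, empty and null values, and realistic names. The 1-to-50-character limits and the `valid_name` validator are illustrative assumptions:

```python
# Good test data for a name field: boundaries, unicode, null/empty, and
# realistic values, each paired with the outcome it should produce.

MIN_LEN, MAX_LEN = 1, 50   # assumed limits for this sketch

def valid_name(value):
    """Toy validator: non-null string within length bounds after trimming."""
    return isinstance(value, str) and MIN_LEN <= len(value.strip()) <= MAX_LEN

TEST_DATA = [
    ("A",          True),   # minimum length boundary
    ("x" * 50,     True),   # maximum length boundary
    ("x" * 51,     False),  # just over maximum
    ("María José", True),   # realistic value with accents
    ("山田太郎",    True),   # unicode outside the Latin script
    ("",           False),  # empty value
    ("   ",        False),  # whitespace only
    (None,         False),  # null value
]

assert all(valid_name(v) == expected for v, expected in TEST_DATA)
print("data set behaves as expected")
```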
Bad Test Data
- Only uses "happy path" data
- Ignores edge cases and boundaries
- Uses overly simplistic test values
- Doesn't reflect real-world complexity
Recommendation
Add specific metrics and measurement techniques for quantitative evaluation (e.g., response times, throughput benchmarks, error rates)
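In that spirit, a minimal sketch of quantitative measurement: timing an operation over repeated runs and reporting latency statistics alongside an error rate. The `measure` helper and its output fields are illustrative names, not an established benchmarking API:

```python
# Time an operation over many runs; report median and worst-case latency
# in milliseconds plus the fraction of runs that raised an error.
import time
import statistics

def measure(func, runs=100):
    """Run func repeatedly, collecting per-run latency and error count."""
    latencies, errors = [], 0
    for _ in range(runs):
        start = time.perf_counter()
        try:
            func()
        except Exception:
            errors += 1              # failed runs still count toward latency
        latencies.append(time.perf_counter() - start)
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "max_ms": max(latencies) * 1000,
        "error_rate": errors / runs,
    }

metrics = measure(lambda: sum(range(1000)))
print(metrics)
```

Throughput benchmarks follow the same pattern (operations completed per unit time); the key is recording numbers rather than impressions, so runs can be compared across versions.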
What NOT to Do
- Don't Skip Documentation: Never assume results will be remembered
- Don't Test Only Happy Paths: Always include error and edge cases
- Don't Rush Through Tests: Thorough testing requires time and attention
- Don't Ignore Small Issues: Minor problems can indicate larger concerns
- Don't Test in Production: Use appropriate test environments
- Don't Make Assumptions: Verify everything explicitly
- Don't Test Without Clear Criteria: Always know what constitutes success
- Don't Ignore Reproducibility: Ensure tests can be repeated consistently
- Don't Mix Testing with Development: Maintain objectivity
- Don't Overlook User Experience: Consider usability alongside functionality