# AI Skill Report Card: Testing Placeholder
## Quick Start

### Test Case 1: Basic functionality

- **Input:** Simple test data
- **Expected:** Pass/fail result
- **Actual:** [Record actual result]
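The test-case template above can be sketched as a minimal pytest-style test. The function under test (`process`) and its data are illustrative assumptions, not part of any real framework:

```python
def process(data):
    """Hypothetical function under test: normalizes a string."""
    return data.strip().lower()

def test_basic_functionality():
    # Input: simple test data
    input_data = "  Test  "
    # Expected: the normalized string
    expected = "test"
    # Actual: captured and checked by the assertion
    actual = process(input_data)
    assert actual == expected

test_basic_functionality()
```

A passing run is silent; a failing assertion is the "fail" result to record.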
## Workflow

1. **Define test scope**: what are you testing?
2. **Create test data**: generate minimal viable inputs.
3. **Set expectations**: define what success looks like.
4. **Execute test**: run the test case.
5. **Record results**: document pass/fail with details.
6. **Iterate**: adjust based on findings.
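The workflow above can be mirrored in a tiny harness. This is a sketch, and the `TestCase` record, `run_case` helper, and doubling function are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str            # define test scope
    input_data: object   # create test data
    expected: object     # set expectations
    actual: object = None
    passed: bool = False
    notes: str = ""

def run_case(case, fn):
    """Execute the test and record pass/fail with details."""
    case.actual = fn(case.input_data)
    case.passed = case.actual == case.expected
    case.notes = f"expected={case.expected!r}, actual={case.actual!r}"
    return case

# Example: testing a hypothetical doubling function.
case = run_case(TestCase("doubles input", 2, 4), lambda x: x * 2)
print(case.passed, case.notes)
```

Iteration then means adjusting the test data or expectations and re-running `run_case`.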
## Progress

- [ ] Test scope defined
- [ ] Test data prepared
- [ ] Success criteria set
- [ ] Test executed
- [ ] Results documented
## Examples

**Example 1**
- Input: "test" repeated multiple times
- Output: Structured test framework with clear steps

**Example 2**
- Input: Vague requirements
- Output: Minimal viable test case that can be expanded
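Example 2's idea, turning a vague requirement into a minimal viable test, might look like this; the requirement wording and the `parse_name` function are assumptions for illustration:

```python
# Vague requirement: "the parser should handle names".
# Minimal viable test: pin down ONE concrete behavior first,
# then expand with more cases once it passes.

def parse_name(raw):
    """Hypothetical function under test."""
    return raw.strip().title()

def test_parse_simple_name():
    assert parse_name(" ada lovelace ") == "Ada Lovelace"

test_parse_simple_name()
```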
## Best Practices
- Start with the simplest possible test
- One assertion per test case
- Use descriptive test names
- Document expected vs actual results
- Keep test data minimal but representative
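The "one assertion per test" and "descriptive test names" practices can be illustrated with a small sketch (the `add` function is a stand-in):

```python
def add(a, b):
    """Hypothetical function under test."""
    return a + b

# One assertion per test, with a name that states the behavior:
def test_add_returns_sum_of_positives():
    assert add(2, 3) == 5

def test_add_handles_negative_operand():
    assert add(2, -3) == -1

# Avoid a single test_add() checking both cases:
# a failure there would not tell you which behavior broke.
```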
## Common Pitfalls
- Testing too many things at once
- Unclear success criteria
- Not documenting assumptions
- Skipping edge cases entirely
- Over-engineering simple tests