AI Skill Report Card

Test Design and Execution Skill

A- · 88 · Jan 10, 2026

This skill enables AI to design, structure, and execute comprehensive tests across various domains including software, processes, hypotheses, and systems. It encompasses creating test cases, defining success criteria, implementing validation methods, and interpreting results to drive informed decisions.

1. Test Planning Phase

  • Define Objectives: Clearly articulate what you're testing and why
  • Identify Scope: Determine boundaries, inclusions, and exclusions
  • Set Success Criteria: Establish measurable, specific outcomes
  • Risk Assessment: Identify potential failure points and mitigation strategies
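"Set Success Criteria" is easiest to honor when the criteria are expressed as data rather than prose. A minimal sketch, with threshold names and values that are illustrative assumptions rather than part of any real framework:

```python
# Measurable success criteria as data; names and thresholds are illustrative.
success_criteria = {
    "max_response_ms": 2000,   # e.g. "response time < 2 seconds"
    "min_pass_rate": 0.95,     # at least 95% of test cases must pass
    "max_error_rate": 0.01,    # no more than 1% of requests may error
}

def meets_criteria(results: dict, criteria: dict) -> bool:
    """Return True only if every measured value satisfies its threshold."""
    return (
        results["response_ms"] <= criteria["max_response_ms"]
        and results["pass_rate"] >= criteria["min_pass_rate"]
        and results["error_rate"] <= criteria["max_error_rate"]
    )
```

Because each criterion is a number, pass/fail becomes a mechanical check instead of a judgment call at analysis time.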

2. Test Design Phase

  • Create Test Cases: Develop specific scenarios covering normal, edge, and error conditions
  • Design Test Data: Prepare realistic, representative datasets
  • Select Test Methods: Choose appropriate testing approaches (unit, integration, system, etc.)
  • Plan Test Environment: Specify required tools, systems, and conditions
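The "normal, edge, and error conditions" guidance above can be sketched concretely. The function under test here (`parse_age`) and its validation rules are hypothetical, chosen only to show all three case categories:

```python
# Hypothetical function under test; name and rules are illustrative.
def parse_age(raw: str) -> int:
    value = int(raw)  # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

# Normal and boundary (edge) conditions
normal_and_edge = [("42", 42), ("0", 0), ("150", 150)]
for raw, expected in normal_and_edge:
    assert parse_age(raw) == expected

# Error conditions: each input must be rejected, not silently accepted
for bad in ["-1", "151", "abc", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```

Note that the error cases assert the rejection itself; a design that only checks happy-path returns would miss them entirely.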

3. Test Execution Phase

  • Execute Systematically: Follow planned sequence and document each step
  • Monitor in Real Time: Observe system behavior and capture anomalies as they occur
  • Collect Data: Gather quantitative and qualitative evidence
  • Document Results: Record outcomes, observations, and unexpected behaviors
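"Execute Systematically" and "Document Results" pair naturally: every executed step gets a timestamped log entry. A minimal sketch, with a log structure that is an assumption for illustration:

```python
import datetime

# Timestamped execution log: one entry per step, appended in run order.
log = []

def record(step: str, outcome: str) -> None:
    """Append one entry per executed step so the run is fully traceable."""
    log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,
        "outcome": outcome,
    })

record("Open login page", "page loaded")
record("Submit valid credentials", "redirected to dashboard")
```

Even this much structure makes it possible to reconstruct the exact sequence of actions when an anomaly is found hours later.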

4. Test Analysis Phase

  • Compare Against Criteria: Evaluate results against predefined success metrics
  • Identify Patterns: Look for trends, correlations, and root causes
  • Assess Impact: Determine significance of findings
  • Generate Recommendations: Propose actionable next steps
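"Identify Patterns" often starts by clustering failures. A sketch using triaged results, where the test names and cause tags are invented for illustration:

```python
from collections import Counter

# Hypothetical triaged results; each failure carries a suspected-cause tag.
results = [
    {"test": "login_valid",    "status": "pass", "cause": None},
    {"test": "login_locked",   "status": "fail", "cause": "auth"},
    {"test": "search_empty",   "status": "fail", "cause": "input-validation"},
    {"test": "search_unicode", "status": "fail", "cause": "input-validation"},
]

# Group failures by suspected cause; the largest cluster is where
# root-cause analysis should start.
failure_patterns = Counter(r["cause"] for r in results if r["status"] == "fail")
top_cause, count = failure_patterns.most_common(1)[0]
```

The point is not the specific tags but the habit: aggregate before concluding, so one loud failure doesn't hide a quieter but more common one.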

Planning Excellence

  • Start with clear, testable hypotheses
  • Include both positive and negative test cases
  • Plan for test data cleanup and environment reset
  • Build in time for unexpected discoveries

Execution Discipline

  • Follow documented procedures consistently
  • Maintain detailed logs with timestamps
  • Test one variable at a time when possible
  • Preserve original data and environments

Documentation Standards

  • Use consistent naming conventions
  • Include context and assumptions
  • Document both expected and actual results
  • Maintain traceability between requirements and tests
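Traceability between requirements and tests can be kept as a plain mapping and checked mechanically. The requirement and test IDs below are made up for illustration:

```python
# Traceability matrix: requirement ID -> covering test IDs (IDs are invented).
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-201"],
    "REQ-003": [],  # no covering test yet
}

# Any requirement with no linked test is a coverage gap to flag.
uncovered = [req for req, tests in traceability.items() if not tests]
```

Running this check before execution turns "maintain traceability" from a documentation chore into an enforceable gate.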

Quality Assurance

  • Peer review test designs before execution
  • Validate test tools and environments first
  • Run baseline tests to establish known good states
  • Implement version control for test artifacts

Basic Test Case Template

TEST ID: [Unique identifier]
OBJECTIVE: [What this test validates]
PRECONDITIONS: [Required setup]
TEST STEPS:
1. [Action]
2. [Action]
3. [Verification]
EXPECTED RESULT: [Specific outcome]
ACTUAL RESULT: [To be filled during execution]
STATUS: [Pass/Fail/Blocked]
NOTES: [Additional observations]
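The template above can also live as a structured record so tooling can validate and aggregate it. A sketch where the field names mirror the template and everything else is an assumption:

```python
from dataclasses import dataclass

# The test case template as a structured record (field names mirror the
# template above; defaults reflect a not-yet-executed case).
@dataclass
class TestCase:
    test_id: str
    objective: str
    preconditions: str
    steps: list
    expected_result: str
    actual_result: str = ""
    status: str = "Blocked"  # Pass / Fail / Blocked
    notes: str = ""

tc = TestCase(
    test_id="TC-001",
    objective="Verify a valid user can log in",
    preconditions="Test account exists",
    steps=["Open login page", "Enter valid credentials", "Verify dashboard loads"],
    expected_result="User reaches the dashboard",
)
```

Defaulting `status` to "Blocked" (rather than "Pass") means a case that was never actually run can't silently count as passing.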

Test Plan Template

# Test Plan: [Name]

## Scope
  • In Scope: [What will be tested]
  • Out of Scope: [What won't be tested]

## Objectives
  • [Primary goal]
  • [Secondary goals]

## Success Criteria
  • [Measurable criterion 1]
  • [Measurable criterion 2]

## Approach
  • [Testing strategy]
  • [Tools and methods]

## Risks
  • Risk: [Description] | Mitigation: [Strategy]

## Schedule
  • Planning: [Dates]
  • Execution: [Dates]
  • Analysis: [Dates]

Good Test Design

TEST: User Login Validation
OBJECTIVE: Verify system correctly authenticates valid users
STEPS:
1. Navigate to login page
2. Enter valid username: "testuser@company.com"
3. Enter valid password: "SecurePass123!"
4. Click "Login" button
5. Verify redirect to dashboard occurs within 3 seconds
EXPECTED: User successfully logged in, dashboard displayed
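The well-designed case above translates directly into an automated check. Here it is sketched against a stand-in authentication function; the user store and `login` function are hypothetical fixtures, not a real API:

```python
# Stand-in test fixture: a known-good account (hypothetical, for illustration).
VALID_USERS = {"testuser@company.com": "SecurePass123!"}

def login(username: str, password: str) -> str:
    """Return the page the user lands on after a login attempt."""
    if VALID_USERS.get(username) == password:
        return "dashboard"
    return "login"

# Steps 2-5: valid credentials must reach the dashboard...
assert login("testuser@company.com", "SecurePass123!") == "dashboard"
# ...and the negative companion case: wrong credentials must not.
assert login("testuser@company.com", "WrongPass") == "login"
```

Because the original case named exact inputs and an exact expected outcome, the translation to code is mechanical; the vague version below offers nothing to automate.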

Poor Test Design

TEST: Login Test
OBJECTIVE: Test login
STEPS:
1. Try to login
2. See if it works
EXPECTED: It should work

Comprehensive Test Coverage

  • Happy Path: Normal, expected user behavior
  • Edge Cases: Boundary conditions and limits
  • Error Conditions: Invalid inputs and system failures
  • Security Tests: Authorization and data protection
  • Performance Tests: Load, stress, and response times
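The first three coverage categories can be shown on a single function; `safe_divide` below is a hypothetical example, not taken from the source, and security and performance checks follow the same pattern with different probes:

```python
# Hypothetical function exercised across coverage categories.
def safe_divide(a: float, b: float) -> float:
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b

assert safe_divide(10, 2) == 5.0      # happy path: normal inputs
assert safe_divide(1, 10**9) > 0      # edge case: extreme but valid input
try:
    safe_divide(1, 0)                 # error condition: invalid input
except ZeroDivisionError:
    pass
else:
    raise AssertionError("division by zero must be rejected")
```

A suite that stops after the first assertion is the "Inadequate Coverage" pattern described next.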

Inadequate Coverage

  • Only testing successful scenarios
  • Ignoring boundary conditions
  • Missing error handling validation
  • No performance considerations
Recommendation

Add specific metrics and quantitative success criteria examples (e.g., "response time < 2 seconds" rather than "fast response").
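A quantitative criterion of this kind is directly checkable in code. A minimal timing sketch, where the operation being measured is a stand-in:

```python
import time

MAX_ELAPSED_SECONDS = 2.0  # quantitative criterion instead of "fast"

start = time.perf_counter()
result = sum(range(1000))  # stand-in for the operation under test
elapsed = time.perf_counter() - start

# Pass/fail is now unambiguous: the measurement meets the bound or it doesn't.
assert elapsed < MAX_ELAPSED_SECONDS, f"too slow: {elapsed:.3f}s"
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic clock intended for measuring intervals.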

Planning Mistakes

  • Don't start testing without clear objectives
  • Don't skip test case documentation "to save time"
  • Don't test everything at once without isolation
  • Don't ignore environment differences

Execution Errors

  • Don't deviate from planned test steps without documentation
  • Don't assume failed tests are always product defects
  • Don't skip cleaning up test data between runs
  • Don't ignore unexpected results as "probably nothing"

Analysis Pitfalls

  • Don't conclude causation from correlation alone
  • Don't dismiss edge case failures as unimportant
  • Don't make recommendations without supporting evidence
  • Don't forget to validate your testing tools and methods

Communication Failures

  • Don't report results without context
  • Don't use technical jargon without explanation for stakeholders
  • Don't hide negative results or unexpected findings
  • Don't make promises about future behavior based on limited testing

Process Violations

  • Don't change multiple variables simultaneously
  • Don't rely on memory instead of documented procedures
  • Don't skip retesting after making changes
  • Don't assume previous test results remain valid indefinitely
Grade: A-

AI Skill Framework Scorecard: Criteria Breakdown

  • Quick Start: 11/15
  • Workflow: 11/15
  • Examples: 15/20
  • Completeness: 15/20
  • Format: 11/15
  • Conciseness: 11/15