AI Skill Report Card

Generated Skill

B- · 70 · Jan 10, 2026

Test Execution and Quality Assurance Skill

This skill enables an AI to systematically design, execute, and evaluate tests across domains ranging from software testing to hypothesis validation to process verification. It focuses on building comprehensive test strategies that identify issues, validate functionality, and ensure quality standards are met.

1. Test Planning Phase

  • Define Objectives: Clearly articulate what you're testing and why
  • Identify Scope: Determine boundaries, inclusions, and exclusions
  • Risk Assessment: Identify high-risk areas requiring thorough testing
  • Resource Planning: Estimate time, tools, and expertise needed
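
The planning steps above can be sketched as simple risk-based prioritization. The areas, and the 1-5 likelihood/impact scores, are hypothetical examples:

```python
# A minimal sketch of risk-based test prioritization (illustrative data).
def prioritize(areas):
    """Rank test areas by risk score = likelihood * impact (1-5 scales)."""
    return sorted(areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)

areas = [
    {"name": "login", "likelihood": 4, "impact": 5},
    {"name": "help page", "likelihood": 1, "impact": 1},
    {"name": "payment", "likelihood": 3, "impact": 5},
]
ranked = prioritize(areas)
# Highest-risk areas come first, so they get the most thorough testing.
```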

2. Test Design Phase

  • Create Test Cases: Develop specific, measurable test scenarios
  • Define Success Criteria: Establish clear pass/fail conditions
  • Input/Output Mapping: Specify expected inputs and desired outputs
  • Edge Case Identification: Consider boundary conditions and error states
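
One way to capture input/output mapping, edge cases, and error states together is a table of (input, expected) pairs. The `is_valid_username` function and its 3-16 character rule are invented here for illustration:

```python
# Sketch: input/output mapping for a hypothetical username validator.
import re

def is_valid_username(name):
    """Accept 3-16 ASCII characters: letters, digits, underscores."""
    return bool(re.fullmatch(r"\w{3,16}", name, flags=re.ASCII))

cases = [
    ("alice", True),        # positive scenario
    ("ab", False),          # below minimum length (boundary)
    ("a" * 16, True),       # exactly at maximum (boundary)
    ("a" * 17, False),      # just past maximum (boundary)
    ("bad name!", False),   # error state: invalid characters
]
results = [(inp, is_valid_username(inp) == expected) for inp, expected in cases]
```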

3. Test Execution Phase

  • Environment Setup: Prepare testing conditions and dependencies
  • Sequential Execution: Run tests in logical order with proper isolation
  • Data Collection: Document results, observations, and anomalies
  • Issue Tracking: Record defects with reproduction steps

4. Analysis and Reporting Phase

  • Result Evaluation: Compare actual vs expected outcomes
  • Pattern Recognition: Identify trends and root causes
  • Risk Categorization: Prioritize issues by severity and impact
  • Recommendation Development: Propose solutions and improvements
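
Risk categorization can be as simple as grouping recorded defects by severity and triaging the most severe first; the defect records and severity order below are hypothetical:

```python
# Sketch: prioritizing recorded defects by severity for the report.
from collections import Counter

defects = [
    {"id": "D1", "severity": "critical"},
    {"id": "D2", "severity": "minor"},
    {"id": "D3", "severity": "critical"},
]
by_severity = Counter(d["severity"] for d in defects)
order = ["critical", "major", "minor"]
triage = sorted(defects, key=lambda d: order.index(d["severity"]))
# Critical defects now lead the triage queue.
```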

Test Design

  • Use equivalence partitioning to reduce redundant test cases
  • Apply boundary value analysis for numerical inputs
  • Include both positive and negative test scenarios
  • Design tests to be independent and repeatable
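
Equivalence partitioning and boundary value analysis can be sketched against an assumed eligibility rule (ages 18-65, chosen purely for illustration):

```python
# Sketch: one representative per equivalence partition, plus values on and
# around each boundary of an assumed 18-65 eligibility rule.
def is_eligible(age):
    return 18 <= age <= 65

partitions = {17: False, 40: True, 70: False}           # below / inside / above
boundaries = {17: False, 18: True, 65: True, 66: False}
checks = {**partitions, **boundaries}
assert all(is_eligible(a) == expected for a, expected in checks.items())
```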

Execution

  • Document everything - assumptions, environment, steps, results
  • Test early and often in iterative cycles
  • Maintain test data integrity between runs
  • Use automation for repetitive tasks
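
Test data integrity between runs can be preserved by handing each run a deep copy of a pristine baseline, so mutations never leak across tests. The data shape here is illustrative:

```python
# Sketch: each run works on a pristine copy of the baseline test data.
import copy

BASELINE = {"users": [{"name": "alice", "balance": 100}]}

def fresh_data():
    return copy.deepcopy(BASELINE)   # mutations never reach the baseline

data = fresh_data()
data["users"][0]["balance"] -= 30    # a test mutates its own copy...
# ...while the shared baseline stays intact for the next run.
```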

Quality Assurance

  • Peer review test plans before execution
  • Validate test results with subject matter experts
  • Maintain traceability between requirements and tests
  • Hold regular retrospectives to improve testing processes
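
Requirement-to-test traceability can be checked mechanically by comparing requirement IDs against the tests that reference them; all IDs below are made up:

```python
# Sketch: find requirements that no test case traces back to.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
coverage = {
    "TC_LOGIN_001": {"REQ-1"},
    "TC_LOGIN_002": {"REQ-1", "REQ-2"},
}

covered = set().union(*coverage.values())
untested = requirements - covered    # requirements with no linked test
```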

Test Case Template

**Test ID**: TC_[Category]_[Number]
**Test Name**: [Descriptive name]
**Objective**: [What this test validates]
**Preconditions**: [Setup requirements]
**Test Steps**: 
1. [Action step]
2. [Action step]
**Expected Result**: [What should happen]
**Actual Result**: [What actually happened]
**Status**: [Pass/Fail/Blocked]
**Notes**: [Additional observations]
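
The template above can also be carried as a structured record. This dataclass mirrors its fields; the field names are chosen here for the sketch, not taken from any standard:

```python
# Sketch: the test case template as a structured record.
from dataclasses import dataclass

@dataclass
class TestCase:
    test_id: str        # TC_[Category]_[Number]
    name: str
    objective: str
    preconditions: str
    steps: list
    expected: str
    actual: str = ""
    status: str = "Blocked"   # Pass / Fail / Blocked
    notes: str = ""

tc = TestCase(
    test_id="TC_LOGIN_001",
    name="Valid user login",
    objective="Verify a registered user can sign in",
    preconditions="User account exists and is unlocked",
    steps=["Open login page", "Enter valid credentials", "Submit"],
    expected="Successful login, redirect to dashboard",
)
```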

Test Plan Template

Scope

  • In Scope: [What will be tested]
  • Out of Scope: [What won't be tested]

Test Strategy

  • Approach: [Testing methodology]
  • Tools: [Testing tools and environments]
  • Schedule: [Timeline and milestones]

Risk Assessment

  • High Risk Areas: [Critical components]
  • Mitigation Strategies: [Risk reduction approaches]

Good Test Design

**Scenario**: Testing login functionality
**Test Case**: Valid user login
- Input: Registered username/password
- Expected: Successful login, redirect to dashboard
- Edge cases: Case sensitivity, special characters
- Error cases: Invalid credentials, locked accounts
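
The good login test case above, sketched as executable checks against a hypothetical `authenticate()` stub (a real suite would drive the actual application rather than this stand-in):

```python
# Stub user store and authenticate() stand in for a real login backend.
USERS = {"alice": "S3cret!"}
LOCKED = {"mallory"}

def authenticate(username, password):
    if username in LOCKED:
        return "locked"
    if USERS.get(username) == password:
        return "dashboard"
    return "invalid"

assert authenticate("alice", "S3cret!") == "dashboard"   # positive case
assert authenticate("ALICE", "S3cret!") == "invalid"     # case sensitivity
assert authenticate("alice", "wrong") == "invalid"       # bad credentials
assert authenticate("mallory", "x") == "locked"          # locked account
```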

Poor Test Design

**Scenario**: Testing login
**Test Case**: "Make sure login works"
- Input: "Some username and password"
- Expected: "Should work"

Good Test Execution

  • Clear step-by-step documentation
  • Consistent test environment
  • Detailed result logging
  • Immediate defect reporting

Poor Test Execution

  • Vague or missing steps
  • Mixed environment conditions
  • Incomplete result documentation
  • Delayed issue reporting

Planning Mistakes

  • Don't skip requirements analysis - Testing without clear requirements leads to incomplete coverage
  • Don't test everything equally - Focus effort on high-risk, high-impact areas
  • Don't ignore environmental factors - Test conditions affect results validity

Execution Errors

  • Don't modify tests during execution - Changes invalidate results and reduce repeatability
  • Don't test multiple changes simultaneously - Isolate variables to identify root causes
  • Don't ignore unexpected results - "Working by accident" often indicates hidden issues

Analysis Pitfalls

  • Don't confuse correlation with causation - Ensure identified causes actually create observed effects
  • Don't dismiss edge cases - Boundary conditions often reveal critical flaws
  • Don't assume one success means universal success - Limited testing doesn't guarantee comprehensive quality

Documentation Failures

  • Don't use subjective language - "Seems fine" isn't measurable; use specific criteria
  • Don't skip negative results - Failed tests provide valuable information
  • Don't lose context - Include environment details, timing, and configuration information
AI Skill Framework Scorecard

Grade: B-

Criteria Breakdown

  • Quick Start: 11/15
  • Workflow: 11/15
  • Examples: 15/20
  • Completeness: 15/20
  • Format: 11/15
  • Conciseness: 11/15