AI Skill Report Card

Creating Test Strategies

Grade: C+ · Score: 62 · Feb 13, 2026 · Source: Extension-page
Quick Start: 8 / 15
Python
# Basic test strategy template
test_strategy = {
    "scope": "What are we testing?",
    "objectives": ["Verify functionality", "Ensure performance", "Check security"],
    "test_types": ["Unit", "Integration", "System", "Acceptance"],
    "entry_criteria": ["Code complete", "Environment ready"],
    "exit_criteria": ["95% pass rate", "No critical bugs"],
    "resources": ["Tools", "Environment", "Personnel"],
}
Recommendation
The Quick Start template is too abstract: provide a concrete example, like 'Testing login functionality for a banking app', with actual test cases instead of generic placeholders.
Workflow: 12 / 15

Progress:

  • Define test scope and objectives
  • Identify test types and levels
  • Create test cases and scenarios
  • Set up test environment
  • Execute tests and track results
  • Report findings and recommendations

Step-by-Step Process

  1. Scope Definition

    • Identify what needs testing (features, systems, processes)
    • Define boundaries (what's included/excluded)
    • Document assumptions and constraints
  2. Test Planning

    • Choose appropriate test types (functional, non-functional)
    • Define test levels (unit, integration, system, acceptance)
    • Establish entry/exit criteria
  3. Test Case Design

    • Write detailed test scenarios
    • Include positive and negative test cases
    • Cover edge cases and error conditions
  4. Execution Strategy

    • Set up test environment
    • Execute tests systematically
    • Log defects and track progress
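
The four steps above can be sketched in code: a minimal, hypothetical test-case record plus a pass/fail check. `TestCase`, `run_case`, and the login scenario are illustrative names for this report, not part of any real framework.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    scenario: str            # step 3: a detailed, written scenario
    steps: list              # actions the tester performs
    expected: str            # the expected observable result
    level: str = "system"    # step 2: unit / integration / system / acceptance

def run_case(case, actual_result):
    # Step 4: compare the observed result against the expectation and log it.
    status = "PASS" if actual_result == case.expected else "FAIL"
    return {"id": case.case_id, "level": case.level, "status": status}

login_ok = TestCase(
    case_id="TC-001",
    scenario="Valid user logs in with correct credentials",
    steps=["open login page", "enter valid credentials", "submit"],
    expected="redirected to dashboard",
)

result = run_case(login_ok, actual_result="redirected to dashboard")
print(result)  # status is "PASS" when expectation and observation match
```

Keeping cases as data like this makes step 1's scope boundaries auditable: anything not represented as a `TestCase` is visibly out of scope.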
Recommendation
Examples need actual input/output pairs: show specific test scenarios with expected results rather than bullet-point lists.
Examples: 12 / 20

Example 1: Web Application Testing

Input: E-commerce checkout process
Output:

  • Test login functionality
  • Verify cart operations (add/remove items)
  • Test payment processing
  • Check order confirmation
  • Validate error handling
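
A hedged sketch of what concrete input/output pairs for this example could look like. `Cart`, `add_item`, and the SKU values are hypothetical stand-ins for a real checkout implementation, chosen only to turn the bullets above into checkable scenarios.

```python
class Cart:
    """Hypothetical cart model used only to give the bullets concrete I/O."""

    def __init__(self):
        self.items = {}  # sku -> (quantity, unit price)

    def add_item(self, sku, price, qty=1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        old_qty, _ = self.items.get(sku, (0, price))
        self.items[sku] = (old_qty + qty, price)

    def remove_item(self, sku):
        self.items.pop(sku, None)

    def total(self):
        return sum(qty * price for qty, price in self.items.values())

# Input: add two products, remove one.  Output: a concrete expected total.
cart = Cart()
cart.add_item("SKU-1", 19.99)
cart.add_item("SKU-2", 5.00, qty=2)
cart.remove_item("SKU-1")
assert cart.total() == 10.00  # 2 x 5.00

# Error handling: a non-positive quantity must be rejected.
try:
    cart.add_item("SKU-3", 1.00, qty=0)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```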

Example 2: API Testing Strategy

Input: REST API for user management
Output:

  • Authentication tests (valid/invalid tokens)
  • CRUD operations (Create, Read, Update, Delete)
  • Input validation (boundary values, invalid data)
  • Response validation (status codes, data format)
  • Performance testing (load, stress)
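
One way to make these bullets concrete, under the assumption that plain dicts stand in for real HTTP responses: `validate_response` and the case table below are illustrative, not a real API client.

```python
def validate_response(response, expected_status, required_fields=()):
    # Response validation: check the status code first, then the data format.
    if response["status"] != expected_status:
        return False
    body = response.get("body", {})
    return all(f in body for f in required_fields)

# Each case pairs an input (simulated response) with its expected output.
cases = [
    # description, simulated response, expected status, required body fields
    ("valid token, read user", {"status": 200, "body": {"id": 1, "name": "a"}}, 200, ("id", "name")),
    ("invalid token rejected", {"status": 401, "body": {}}, 401, ()),
    ("boundary: missing name", {"status": 422, "body": {"error": "name is required"}}, 422, ("error",)),
]

results = [validate_response(resp, status, fields) for _, resp, status, fields in cases]
```

The same table shape extends naturally to the other bullets: CRUD cases add a method/payload column, and performance cases add a latency budget.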
Recommendation
Add concrete templates or frameworks (like BDD scenarios, test case formats, or risk assessment matrices) instead of just listing concepts
Best Practices

  • Risk-based testing: Prioritize high-risk areas
  • Traceability: Link test cases to requirements
  • Automation: Automate repetitive tests
  • Documentation: Maintain clear test artifacts
  • Continuous improvement: Learn from each cycle
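
The risk-based bullet can be made concrete with a small prioritization sketch; the 1-5 likelihood/impact scales and the area names are assumptions for illustration, not prescribed values.

```python
# Risk assessment sketch: score = likelihood x impact, test highest risk first.
areas = [
    # (area, likelihood 1-5, impact 1-5)
    ("payment processing", 4, 5),
    ("profile page styling", 2, 1),
    ("login/authentication", 3, 5),
]

def risk_score(likelihood, impact):
    return likelihood * impact

prioritized = sorted(areas, key=lambda a: risk_score(a[1], a[2]), reverse=True)
# Payment (score 20) is tested before login (15); styling (2) comes last.
```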
Common Pitfalls

  • Testing too late in the development cycle
  • Insufficient test data or environment setup
  • Focusing only on happy path scenarios
  • Not testing error conditions and edge cases
  • Poor communication of test results
  • Skipping regression testing after changes
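
To avoid the happy-path and edge-case pitfalls above, negative cases can sit in the same table as valid ones. `validate_age` and its 0-130 range are hypothetical, used only to show boundary and error coverage side by side.

```python
def validate_age(value):
    # Hypothetical validator: integer ages from 0 to 130 inclusive.
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError("age must be an integer")
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

cases = [
    (25, 25),            # happy path
    (0, 0),              # lower boundary
    (130, 130),          # upper boundary
    (-1, ValueError),    # just below range
    (131, ValueError),   # just above range
    ("25", TypeError),   # wrong type
]

for given, expected in cases:
    if isinstance(expected, type) and issubclass(expected, Exception):
        try:
            validate_age(given)
            raise AssertionError(f"{given!r}: expected {expected.__name__}")
        except expected:
            pass
    else:
        assert validate_age(given) == expected
```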
Grade: C+
AI Skill Framework Scorecard
Criteria Breakdown

  • Quick Start: 8/15
  • Workflow: 12/15
  • Examples: 12/20
  • Completeness: 5/20
  • Format: 15/15
  • Conciseness: 10/15

Total: 62/100