AI Skill Report Card

Software Testing

A92·Jan 10, 2026

Software Testing Skill

This skill enables systematic evaluation of software applications to identify defects, verify functionality, and ensure quality standards are met before release. It combines analytical thinking, technical knowledge, and methodical processes to deliver reliable software products.

1. Test Planning & Analysis

  • Understand Requirements: Review functional and non-functional requirements
  • Risk Assessment: Identify high-risk areas and critical user paths
  • Test Strategy: Define scope, approach, resources, and timeline
  • Environment Setup: Prepare test data, tools, and testing environments

2. Test Case Design

  • Equivalence Partitioning: Group inputs into valid and invalid classes
  • Boundary Value Analysis: Test edge cases and limits
  • Decision Table Testing: Cover complex business logic combinations
  • State Transition Testing: Verify system behavior across different states
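The first two techniques above can be sketched in a few lines of Python. Here `validate_age` is a hypothetical rule (accept ages 18–65 inclusive) used only to illustrate how equivalence classes and boundary values translate into concrete test data:

```python
# Sketch of equivalence partitioning and boundary value analysis for a
# hypothetical validate_age() rule: ages 18-65 inclusive are accepted.

def validate_age(age: int) -> bool:
    """System under test (hypothetical): accept ages 18-65 inclusive."""
    return 18 <= age <= 65

# Equivalence classes: one representative value per class is usually enough.
equivalence_cases = [
    (10, False),   # invalid class: below range
    (40, True),    # valid class: inside range
    (80, False),   # invalid class: above range
]

# Boundary values: test each limit and its immediate neighbours.
boundary_cases = [
    (17, False), (18, True),   # lower boundary
    (65, True),  (66, False),  # upper boundary
]

for age, expected in equivalence_cases + boundary_cases:
    assert validate_age(age) == expected, f"unexpected result for age={age}"
```

The payoff of partitioning is economy: three class representatives plus four boundary values replace hundreds of redundant inputs while keeping the defect-finding power.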

3. Test Execution

  • Sequential Execution: Follow test cases methodically
  • Defect Documentation: Record bugs with clear reproduction steps
  • Evidence Capture: Screenshots, logs, and detailed observations
  • Regression Testing: Verify fixes don't break existing functionality

4. Reporting & Communication

  • Status Updates: Regular progress and coverage reports
  • Risk Communication: Highlight critical issues to stakeholders
  • Metrics Tracking: Defect density, test coverage, pass/fail rates
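The metrics listed above reduce to simple ratios. This sketch uses assumed helper names (`defect_density`, `pass_rate`) and the common convention of defects per thousand lines of code (KLOC):

```python
# Illustrative metric formulas; function names and inputs are assumptions.
# defect density = defects per KLOC, pass rate = passed / executed.

def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

def pass_rate(passed: int, executed: int) -> float:
    """Fraction of executed tests that passed."""
    return passed / executed if executed else 0.0

print(defect_density(12, 24000))    # 0.5 defects per KLOC
print(f"{pass_rate(45, 50):.0%}")   # 90%
```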

Documentation Standards

  • Write clear, repeatable test cases with expected results
  • Use consistent naming conventions for test cases and defects
  • Maintain traceability between requirements and test cases
  • Keep test data and environment configurations documented
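Traceability, mentioned above, can start as nothing more than a mapping from requirement IDs to the test cases that cover them; the IDs below are illustrative:

```python
# Minimal traceability sketch: map requirement IDs to covering test cases.
# All IDs here are made up for illustration.

traceability = {
    "REQ-101": ["TC_001", "TC_002"],
    "REQ-102": ["TC_003"],
    "REQ-103": [],            # no coverage yet: should be flagged
}

# Requirements with no covering test case are coverage gaps.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # ['REQ-103']
```

Even this toy structure makes coverage gaps mechanical to find instead of a manual audit.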

Execution Excellence

  • Test early and test often throughout development cycles
  • Prioritize testing based on risk and business impact
  • Use both positive and negative test scenarios
  • Validate error messages and edge cases thoroughly
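A negative scenario should assert on the error message itself, not merely that an error occurred. This sketch uses a hypothetical input parser, `parse_quantity`, to show both sides:

```python
# Positive and negative scenarios for a hypothetical quantity parser.

def parse_quantity(raw: str) -> int:
    """Parse a positive whole-number quantity from user input."""
    value = int(raw)          # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("Quantity must be a positive whole number")
    return value

# Positive scenario: valid input parses cleanly.
assert parse_quantity("3") == 3

# Negative scenario: validate the exact message, not just the exception type.
try:
    parse_quantity("0")
except ValueError as err:
    assert str(err) == "Quantity must be a positive whole number"
```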

Communication

  • Report bugs with severity and priority clearly defined
  • Include environmental details in defect reports
  • Provide constructive feedback to development teams
  • Ask clarifying questions when requirements are ambiguous

Test Case Template

ID: TC_001
Title: [Descriptive test case name]
Preconditions: [System state before test]
Test Steps:
1. [Action to perform]
2. [Next action]
Expected Result: [What should happen]
Actual Result: [What actually happened]
Status: [Pass/Fail/Blocked]
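Teams that manage cases in code sometimes encode this template as a structured record. The `TestCase` class below mirrors the template's fields but is an assumption, not a standard:

```python
# One possible encoding of the test case template as a structured record.
# Field names mirror the template; the class itself is illustrative.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    title: str
    preconditions: str
    steps: list                 # ordered list of actions
    expected: str
    actual: str = ""
    status: str = "Not Run"     # Pass / Fail / Blocked / Not Run

tc = TestCase(
    case_id="TC_001",
    title="Login with valid credentials",
    preconditions="User account exists and is active",
    steps=["Navigate to login page", "Enter valid credentials", "Click Login"],
    expected="User redirected to dashboard with welcome message",
)
```

Keeping cases as data rather than prose makes traceability and pass/fail metrics trivially scriptable.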

Bug Report Template

Bug ID: BUG_001
Summary: [Brief description]
Severity: [Critical/High/Medium/Low]
Priority: [P1/P2/P3/P4]
Environment: [OS, Browser, Version]
Steps to Reproduce:
1. [Step 1]
2. [Step 2]
Expected: [What should happen]
Actual: [What happened instead]
Attachments: [Screenshots, logs]

Good Test Case

Title: Login with valid credentials
Steps:

  1. Navigate to login page
  2. Enter valid username "testuser@example.com"
  3. Enter valid password "ValidPass123"
  4. Click Login button

Expected: User redirected to dashboard with welcome message

Poor Test Case

Title: Test login
Steps: Login with good data
Expected: It works

Good Bug Report

Summary: Shopping cart total miscalculates with discount codes
Steps: Add $50 item, apply 10% discount code "SAVE10", proceed to checkout
Expected: Total shows $45.00
Actual: Total shows $55.00 (adds discount instead of subtracting)

Poor Bug Report

Summary: Cart is broken
Description: Doesn't work right
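The good bug report above pins the defect to a sign error: the discount was added instead of subtracted. Once fixed, a regression test locks in the correct arithmetic; `apply_discount` is an assumed helper, not real application code:

```python
# Hypothetical regression check for the SAVE10 discount bug: the report
# says the total showed $55.00 (discount added) instead of $45.00
# (discount subtracted). apply_discount() is an illustrative helper.

def apply_discount(subtotal: float, percent: float) -> float:
    """Return the total after subtracting a percentage discount."""
    return round(subtotal - subtotal * percent / 100, 2)

# Regression test pinning the exact scenario from the bug report.
assert apply_discount(50.00, 10) == 45.00
```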

Recommendation
Add a dedicated automation testing section with tool recommendations (Selenium, Jest, Cypress) and guidance on when to use automated versus manual testing.

Testing Approach

  • ❌ Don't test only happy path scenarios
  • ❌ Don't skip retesting after bug fixes
  • ❌ Don't assume obvious functionality works without testing
  • ❌ Don't test in production environments first

Documentation

  • ❌ Don't write vague test cases without specific steps
  • ❌ Don't report bugs without reproduction steps
  • ❌ Don't use subjective language ("looks weird", "seems slow")
  • ❌ Don't duplicate existing test cases unnecessarily

Communication

  • ❌ Don't report bugs as personal criticism
  • ❌ Don't delay reporting critical issues
  • ❌ Don't assume developers understand implicit context
  • ❌ Don't mark tests as passed without proper verification

Process

  • ❌ Don't change multiple variables simultaneously when debugging
  • ❌ Don't skip smoke testing after deployments
  • ❌ Don't ignore intermittent issues as "random"
  • ❌ Don't test without understanding the business requirements
Grade: A

AI Skill Framework Scorecard

Criteria Breakdown

  • Quick Start: 11/15
  • Workflow: 11/15
  • Examples: 15/20
  • Completeness: 15/20
  • Format: 11/15
  • Conciseness: 11/15