AI Skill Report Card

Writing Automated Tests

Grade: C+ · Score: 68 · Feb 3, 2026
YAML
---
name: writing-automated-tests
description: Writes comprehensive automated test suites for software applications. Use when you need to create unit tests, integration tests, or end-to-end test automation.
---

Automated Test Writing

Python
# Unit test example
import unittest
from unittest.mock import Mock, patch

class TestUserService(unittest.TestCase):
    def setUp(self):
        self.user_service = UserService()

    def test_create_user_success(self):
        # Arrange
        user_data = {"name": "John", "email": "john@example.com"}

        # Act
        result = self.user_service.create_user(user_data)

        # Assert
        self.assertTrue(result.success)
        self.assertEqual(result.user.name, "John")
Recommendation
Add concrete before/after examples showing actual test failures and how the written tests catch real bugs
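
A minimal sketch of what such a before/after example could look like, assuming a hypothetical calculate_discount function with a boundary bug that the new test exposes:

Python
def calculate_discount(order_total):
    # Buggy version: orders of exactly 100 get no discount (comparison should be >=)
    if order_total > 100:
        return order_total * 0.10
    return 0.0

def test_discount_applies_at_threshold():
    # Fails against the buggy comparison above; passes once it is fixed to >= 100
    assert calculate_discount(100) == 10.0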

Progress:

  • Analyze requirements and identify test scenarios
  • Set up test environment and dependencies
  • Write unit tests for individual components
  • Create integration tests for component interactions
  • Implement end-to-end tests for user workflows
  • Add test data fixtures and mocks
  • Configure test execution and reporting
  • Review test coverage and edge cases

Test Planning

  1. Identify test boundaries - What to test vs. what to mock
  2. Define test categories - Unit, integration, E2E, performance
  3. Create test data strategy - Fixtures, factories, or generators (see the factory sketch after this list)
  4. Plan test organization - File structure and naming conventions
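
As a rough illustration of the factory approach to test data, a small sketch with hypothetical field names (not taken from the skill itself):

Python
import uuid

def make_user(**overrides):
    # Hypothetical factory: realistic, non-sensitive defaults,
    # with each test overriding only the fields it cares about
    user = {
        "id": str(uuid.uuid4()),
        "name": "Test User",
        "email": "test.user@example.com",
        "active": True,
    }
    user.update(overrides)
    return user

def test_inactive_users_are_flagged():
    user = make_user(active=False)
    assert user["active"] is False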

Implementation Steps

  1. Start with happy path - Core functionality working correctly
  2. Add edge cases - Boundary conditions and error scenarios
  3. Mock external dependencies - APIs, databases, file systems
  4. Parameterize tests - Multiple inputs with single test logic
  5. Add cleanup - Teardown methods and resource management (sketched below)
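
A minimal sketch of step 5 using a yield-based pytest fixture for setup and teardown; the temporary-directory resource is only an illustrative assumption:

Python
import shutil
import tempfile
import pytest

@pytest.fixture
def work_dir():
    # Setup: create an isolated temporary directory for the test
    path = tempfile.mkdtemp()
    yield path
    # Teardown: always runs, even if the test fails
    shutil.rmtree(path, ignore_errors=True)

def test_report_is_written(work_dir):
    report_path = f"{work_dir}/report.txt"
    with open(report_path, "w") as f:
        f.write("ok")
    with open(report_path) as f:
        assert f.read() == "ok"
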
Recommendation
Include test configuration templates (pytest.ini, jest.config.js, etc.) and CI/CD integration snippets
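
For instance, since Example 1 below uses @pytest.mark.integration, a conftest.py along these lines could register that marker; this is only a sketch, and the pytest.ini and CI snippets named in the recommendation would be the fuller fix:

Python
# conftest.py - registers the custom marker so pytest does not warn about it
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "integration: tests that exercise real component boundaries"
    )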

Example 1: API Integration Test

Input: Testing user registration endpoint

Output:

Python
@pytest.mark.integration
def test_user_registration_api():
    client = TestClient(app)
    response = client.post("/users", json={
        "email": "test@example.com",
        "password": "SecurePass123"
    })
    assert response.status_code == 201
    assert response.json()["email"] == "test@example.com"
    assert "password" not in response.json()

Example 2: Parameterized Test

Input: Testing email validation with multiple cases

Output:

Python
@pytest.mark.parametrize("email,valid", [
    ("user@domain.com", True),
    ("invalid-email", False),
    ("", False),
    ("user@", False),
    ("@domain.com", False),
])
def test_email_validation(email, valid):
    result = validate_email(email)
    assert result == valid

Example 3: E2E Test with Selenium

Input: Testing login workflow

Output:

Python
def test_user_login_flow(driver):
    driver.get("http://localhost:3000/login")
    driver.find_element(By.ID, "email").send_keys("test@example.com")
    driver.find_element(By.ID, "password").send_keys("password123")
    driver.find_element(By.ID, "login-btn").click()
    assert driver.current_url.endswith("/dashboard")
    assert driver.find_element(By.ID, "welcome-message").is_displayed()
Recommendation
Remove the verbose 'Best Practices' section and integrate key points into the workflow steps instead

Test Structure

  • Use Arrange-Act-Assert pattern for clarity
  • One assertion per test when possible
  • Descriptive test names that explain the scenario
  • Group related tests in classes or modules

Test Data

  • Use factories for complex object creation
  • Create reusable fixtures for common setups (see the sketch after this list)
  • Isolate test data to prevent cross-test contamination
  • Use realistic but non-sensitive test data
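
A small sketch of a reusable fixture that gives every test its own isolated copy of the data; the in-memory store here is a hypothetical stand-in for a real database:

Python
import pytest

@pytest.fixture
def user_store():
    # Each test receives a fresh copy, so one test cannot contaminate another
    return {"alice@example.com": {"name": "Alice", "active": True}}

def test_deactivating_a_user(user_store):
    user_store["alice@example.com"]["active"] = False
    assert user_store["alice@example.com"]["active"] is False

def test_store_starts_clean(user_store):
    # Passes regardless of what the previous test mutated
    assert user_store["alice@example.com"]["active"] is True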

Mocking Strategy

  • Mock external dependencies (APIs, databases, file systems)
  • Mock at the boundary of your system
  • Verify mock interactions when behavior matters
  • Use dependency injection to make mocking easier (see the sketch after this list)
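
A rough sketch of these points together using unittest.mock; the PaymentService and gateway names are invented for illustration:

Python
from unittest.mock import Mock

class PaymentService:
    def __init__(self, gateway):
        # The external gateway is injected, so a test can swap in a mock at the boundary
        self.gateway = gateway

    def charge(self, amount):
        return self.gateway.charge(amount)

def test_charge_delegates_to_gateway():
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}
    service = PaymentService(gateway)

    result = service.charge(25)

    assert result["status"] == "ok"
    # Verify the interaction, since calling the gateway correctly is the behavior under test
    gateway.charge.assert_called_once_with(25)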

Test Maintenance

  • Keep tests simple and focused
  • Refactor tests when production code changes
  • Remove or update obsolete tests
  • Monitor and maintain test execution speed

Common Pitfalls

  • Over-mocking - Mocking internal implementation details instead of external dependencies
  • Brittle tests - Tests that break with minor code changes unrelated to functionality
  • Test pollution - Tests that depend on execution order or shared state
  • Poor test data - Using production data or hardcoded values that become stale
  • Missing negative tests - Only testing happy paths without error scenarios
  • Slow test suites - Not using proper mocking, running unnecessary integrations
  • Unclear test failures - Assertions without descriptive error messages (see the sketch after this list)
  • Testing implementation - Testing how code works instead of what it does
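
To illustrate the unclear-test-failures point, a hedged before/after sketch of adding a descriptive message to an assertion; the response object is hypothetical:

Python
def check_response(response):
    # Without a message, a failure only reports something like "assert 500 == 200"
    assert response.status_code == 200

def check_response_with_context(response):
    # With a message, the failure explains what went wrong and shows the payload
    assert response.status_code == 200, (
        f"Expected 200 OK but got {response.status_code}: {response.text}"
    )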

AI Skill Framework Scorecard

Grade: C+

Criteria Breakdown

  • Quick Start: 11/15
  • Workflow: 11/15
  • Examples: 15/20
  • Completeness: 15/20
  • Format: 11/15
  • Conciseness: 11/15