AI Skill Report Card
Generated Skill
```yaml
---
name: analyzing-codebase-architecture
description: Analyzes large codebases to identify architectural issues, remove redundant tests, and suggest strategic refactors that improve maintainability through interface-based design. Use when reviewing legacy code or planning major refactoring initiatives.
---
```

# Analyzing Codebase Architecture
Quick Start
```bash
# Generate a quick codebase overview: file sizes and recent commits
find . \( -name "*.py" -o -name "*.js" -o -name "*.ts" \) | head -20 | xargs wc -l
git log --since="6 months ago" --pretty=format:"%h %s" | head -10
```
Recommendation: Consider adding more specific examples
Workflow
Steps:
- Map Dependencies: Identify core modules and their relationships
- Interface Analysis: Find abstractions vs concrete implementations
- Test Audit: Locate redundant, tautological, or low-value tests
- Refactor Strategy: Prioritize changes by impact and risk
- Implementation Plan: Phase refactors with measurable improvements
1. Dependency Mapping
```python
# Look for circular dependencies and tight coupling
import ast

def analyze_imports(file_path):
    """Return the modules imported by the file at file_path."""
    with open(file_path) as f:
        tree = ast.parse(f.read())
    modules = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.append(node.module)
    return modules
```
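Once `analyze_imports` has produced a module → imports map, a plain depth-first search over that map is enough to surface circular dependencies. A minimal sketch (the module names in `deps` are hypothetical):

```python
# Detect circular dependencies with a depth-first search over a
# module -> imports mapping (module names here are hypothetical).
def find_cycle(graph):
    """Return one import cycle as a list of modules, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, ()):
            if dep in visiting:  # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for node in graph:
        if node not in visited:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

deps = {
    "orders": ["billing", "users"],
    "billing": ["orders"],  # orders <-> billing is circular
    "users": [],
}
print(find_cycle(deps))  # ['orders', 'billing', 'orders']
```

A library such as networkx offers the same check off the shelf; the point here is only that the import map from the previous step is directly searchable.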
2. Interface Identification
- Abstract Base Classes: Are concepts properly abstracted?
- Dependency Injection: Can components be swapped easily?
- Single Responsibility: Does each module have one clear purpose?
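The first of these questions can be partially automated: flag classes that neither inherit from `ABC` nor declare any `@abstractmethod`. This is a rough heuristic sketch, not a definitive rule, and the sample classes are illustrative:

```python
import ast

# Heuristic: flag classes with no sign of an abstract interface --
# neither an ABC base nor any @abstractmethod-decorated method.
def concrete_classes(source):
    """Return names of classes that look like concrete implementations."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.ClassDef):
            continue
        bases = {b.id for b in node.bases if isinstance(b, ast.Name)}
        has_abstract = any(
            isinstance(d, ast.Name) and d.id == "abstractmethod"
            for f in node.body if isinstance(f, ast.FunctionDef)
            for d in f.decorator_list
        )
        if "ABC" not in bases and not has_abstract:
            flagged.append(node.name)
    return flagged

sample = """
from abc import ABC, abstractmethod

class Storage(ABC):
    @abstractmethod
    def save(self, data): ...

class SmtpMailer:
    def send(self, to, body): ...
"""
print(concrete_classes(sample))  # ['SmtpMailer']
```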
3. Test Quality Audit
```python
# Identify tautological tests
def find_redundant_tests(test_file):
    """
    Flags tests that:
    - Test framework behavior (not business logic)
    - Duplicate other tests
    - Test trivial getters/setters
    - Mock everything (no real behavior tested)
    """
    pass
```
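One of the criteria above can be made concrete: a test is suspect when every assertion merely restates a literal that was passed into a constructor in the same function. A minimal ast-based sketch of that single heuristic (not the full audit):

```python
import ast

# Heuristic for one audit criterion: flag a test whose every assert
# compares something to a literal that also appears as a call argument
# in the same function -- i.e. the test restates its own setup.
def is_tautological(test_source):
    tree = ast.parse(test_source)
    func = tree.body[0]
    passed_literals = {
        c.value
        for node in ast.walk(func) if isinstance(node, ast.Call)
        for c in node.args if isinstance(c, ast.Constant)
    }
    asserts = [n for n in ast.walk(func) if isinstance(n, ast.Assert)]
    if not asserts:
        return False

    def restates_setup(a):
        test = a.test
        return (isinstance(test, ast.Compare)
                and isinstance(test.comparators[0], ast.Constant)
                and test.comparators[0].value in passed_literals)

    return all(restates_setup(a) for a in asserts)

tautological = """
def test_user_creation():
    user = User("john", "doe")
    assert user.first_name == "john"
    assert user.last_name == "doe"
"""
print(is_tautological(tautological))  # True
```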
4. Strategic Refactor Planning
Priority order:
- High-impact, Low-risk: Extract interfaces from concrete classes
- Medium-impact, Medium-risk: Consolidate duplicate logic
- High-impact, High-risk: Architectural pattern changes
Recommendation: Include edge cases
Examples
Example 1: Interface Extraction Input:
```python
class EmailService:
    def send_email(self, to, subject, body):
        # SMTP implementation
        pass
```
Output:
```python
from abc import ABC, abstractmethod

class NotificationService(ABC):
    @abstractmethod
    def send(self, recipient: str, subject: str, content: str) -> bool:
        pass

class EmailNotificationService(NotificationService):
    def send(self, recipient: str, subject: str, content: str) -> bool:
        # SMTP implementation
        pass
```
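The payoff of the extracted interface is swappability: callers depend on `NotificationService`, so a test double can be injected in place of the SMTP implementation. A sketch, where `FakeNotificationService` and `notify_user` are illustrative names rather than part of the example above:

```python
from abc import ABC, abstractmethod

class NotificationService(ABC):
    @abstractmethod
    def send(self, recipient: str, subject: str, content: str) -> bool: ...

# A test double that records calls instead of touching SMTP
# (FakeNotificationService / notify_user are hypothetical names).
class FakeNotificationService(NotificationService):
    def __init__(self):
        self.sent = []

    def send(self, recipient, subject, content):
        self.sent.append((recipient, subject, content))
        return True

def notify_user(service: NotificationService, user_email: str) -> bool:
    # Business logic depends only on the interface, so any
    # implementation can be injected here.
    return service.send(user_email, "Welcome", "Thanks for signing up")

fake = FakeNotificationService()
assert notify_user(fake, "a@example.com")
print(fake.sent)  # [('a@example.com', 'Welcome', 'Thanks for signing up')]
```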
Example 2: Tautological Test Removal Input:
```python
def test_user_creation():
    user = User("john", "doe")
    assert user.first_name == "john"  # Tautological
    assert user.last_name == "doe"  # Tautological
```
Output:
```python
import pytest

def test_user_creation_validation():
    # Test actual business logic
    with pytest.raises(ValidationError):
        User("", "doe")  # Empty first name should fail
```
Best Practices
- Measure Before Refactoring: Use cyclomatic complexity, test coverage, and build times as baselines
- Interface-First Design: Start with contracts, then implement
- Progressive Enhancement: Make small, verifiable improvements
- Documentation as Code: Interfaces should be self-documenting
- DX Focus: Every change should make the developer experience measurably better
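The "measure before refactoring" baseline need not require extra tooling: a rough cyclomatic-complexity count over the ast is enough for before/after comparison. This is a simplified approximation of what dedicated tools like radon compute, not their exact metric:

```python
import ast

# Rough cyclomatic-complexity baseline using only the stdlib: count
# decision points per function. Good enough for comparing a codebase
# against itself before and after a refactor.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.With, ast.Assert, ast.comprehension)

def complexity_baseline(source):
    """Map each function name to an approximate complexity score."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES)
                           for n in ast.walk(node))
            # Each extra operand in `a and b and c` adds a path
            bool_ops = sum(len(n.values) - 1 for n in ast.walk(node)
                           if isinstance(n, ast.BoolOp))
            scores[node.name] = 1 + branches + bool_ops
    return scores

sample = """
def classify(x):
    if x > 0 and x < 10:
        return "small"
    for i in range(3):
        if i == x:
            return "match"
    return "other"
"""
print(complexity_baseline(sample))  # {'classify': 5}
```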
Common Pitfalls
- Over-abstraction: Don't create interfaces for single implementations
- Refactoring Without Tests: Ensure behavioral preservation
- Big Bang Rewrites: Incremental changes reduce risk
- Removing All Tests: Even tautological tests might catch regressions during refactoring
- Ignoring Performance: Beautiful code that's slow isn't better