AI Skill Report Card

Analyzing Development Patterns

Grade: B- (72) · Feb 2, 2026
Bash
# Analyze last epic's patterns
git log --grep="MKPLS-XXX" --oneline --stat
/analyze-chat --epic=MKPLS-XXX

# Generate improvement recommendations
echo "## Process Analysis: MKPLS-XXX" > analysis.md
echo "Evidence: [findings]" >> analysis.md
echo "Recommendations: [actions]" >> analysis.md
Recommendation
Replace the abstract git commands with concrete analysis examples that show real pattern findings and specific data

Progress:

  • Gather Data: Git history + conversation transcripts + metrics
  • Identify Patterns: Success patterns vs anti-patterns
  • Extract Insights: Root causes with evidence
  • Generate Recommendations: Specific, actionable improvements
  • Create Report: Structured findings with priorities
  • Track Impact: Measure before/after effectiveness
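As a minimal sketch, the six steps above could be wired into a driver script. The epic key, output filenames, and report skeleton below are placeholders, and the interactive /analyze-chat and manual review steps are left as stubs:

```shell
# Hypothetical driver for the workflow above. EPIC and the output
# filenames are assumptions; pattern/insight steps are stubs because
# they come from manual review of the gathered data.
EPIC="MKPLS-XXX"
OUT="analysis.md"

# Step 1: gather data (git history; transcripts come from /analyze-chat)
git log --grep="$EPIC" --oneline --stat > commits.txt 2>/dev/null || true

# Steps 2-5: identify patterns, extract insights, recommend, report
{
  echo "# Process Analysis: $EPIC"
  echo "## Patterns"
  echo "## Recommendations"
} > "$OUT"

echo "Report skeleton written to $OUT"
```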

1. Data Collection

Git History Analysis:

Bash
# Epic commits and patterns
git log --grep="MKPLS-XXX" --oneline --stat
git branch -a | grep MKPLS-XXX

# Quality trends
git log --format="%s" | grep "tests pass"
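For the quality-trend query, here is a sketch of what the counting step could look like. The sample commit subjects are made up for illustration; in practice, pipe `git log --format="%s"` in instead:

```shell
# Quantify the "tests pass" trend from commit subjects.
# The sample data is illustrative; replace with: git log --format="%s"
subjects='MKPLS-101: add cart API, tests pass
MKPLS-102: fix price rounding
MKPLS-103: refactor checkout, tests pass
MKPLS-104: update docs'

total=$(printf '%s\n' "$subjects" | grep -c '')
passing=$(printf '%s\n' "$subjects" | grep -c 'tests pass')
echo "tests-pass commits: $passing/$total"
```

With the sample data this reports 2 of 4 commits mentioning passing tests.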

Conversation Analysis:

Bash
# Agent effectiveness and tool usage
/analyze-chat --date-range="2024-01-01,2024-01-15"

# Extract metrics:
# - Agent delegation patterns
# - Skill adoption rates
# - Tool usage ratios
# - Cache efficiency

Metrics Collection:

  • Test coverage trends
  • Build success rates
  • Time to resolution
  • Rule adherence frequency
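As an illustration of the build-success-rate metric, a sketch that assumes a one-status-per-line CI export (the log format is an assumption, not a real CI API):

```shell
# Build success rate from a CI status log (assumed format:
# one "success"/"failure" per line, exported from your CI).
statuses='success
success
failure
success'

total=$(printf '%s\n' "$statuses" | grep -c '')
ok=$(printf '%s\n' "$statuses" | grep -c '^success$')
echo "build success rate: $((100 * ok / total))%"
```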

2. Pattern Recognition

Success Patterns (reinforce):

  • Effective TDD workflows
  • Clean agent delegation
  • High rule adherence
  • Strong cross-repo coordination

Anti-Patterns (address):

  • Repeated rule violations
  • Inefficient tool usage
  • Communication breakdowns
  • Quality regressions

3. Report Generation

Template structure:

Markdown
# Process Analysis: [Scope]

## Summary
[Key findings and impact]

## Success Patterns

### Pattern: [Name]
- Evidence: [Data points]
- Frequency: [X instances]
- Impact: [Measurable outcome]
- Recommendation: [How to reinforce]

## Anti-Patterns

### Pattern: [Name]
- Evidence: [Data points]
- Root Cause: [Analysis]
- Impact: [Cost/delay]
- Recommendation: [Specific fix]

## Priorities
- HIGH: [High impact, low effort]
- MEDIUM: [Medium impact]
- LOW: [Nice to have]

Recommendation
The Quick Start section jumps straight to bash commands without showing what the actual output or insights would look like; it needs an immediate value demonstration

Example 1: Agent Effectiveness Analysis

Input: /analyze-chat --epic=MKPLS-211
Output:

Agent Usage:
- backend-specialist: 25/30 (83%) ⚠️ Dominant
- test-specialist: 3/30 (10%) ⚠️ Underused

Issue: Test agent underutilized
Evidence: 7 test files modified by backend-specialist
Root Cause: Unclear delegation rules
Action: Update agent-delegation.md with "test-only tasks → test-specialist"
Expected Impact: 35% test agent utilization, improved test quality

Example 2: TDD Adherence Pattern

Input: Git history showing test-first violations
Output:

Anti-Pattern: Tests written after implementation
Frequency: 8/12 features (67%)
Evidence: Implementation commits precede test commits
Impact: 3 bugs found in production, 2 days rework
Recommendation: Add pre-commit hook reminder "Run tests first"
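The recommended pre-commit reminder could look like the sketch below. The `src/` and `test` path patterns are assumptions about repo layout, and the check is wrapped in a function so the real hook body stays a one-liner:

```shell
#!/bin/sh
# Sketch of a pre-commit TDD reminder (install as .git/hooks/pre-commit,
# chmod +x). The path patterns are assumptions; adjust to your layout.
check_tdd() {
  files="$1"
  src_n=$(printf '%s\n' "$files" | grep -c '^src/')
  test_n=$(printf '%s\n' "$files" | grep -c 'test')
  if [ "$src_n" -gt 0 ] && [ "$test_n" -eq 0 ]; then
    echo "Reminder: run tests first - staged src/ changes have no test changes."
  fi
}

# In the real hook body, feed it the staged file list:
#   check_tdd "$(git diff --cached --name-only)"
check_tdd "src/cart.py
src/checkout.py"
```

This only prints a reminder rather than blocking the commit, matching the "reminder" wording of the recommendation.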

Example 3: Tool Usage Optimization

Input: High Bash:Read ratio (4:1)
Output:

Anti-Pattern: Excessive file searching
Evidence: Bash:Read ratio 4:1 (expected <2:1)
Root Cause: Not reusing context efficiently
Action: Update rules for context reuse patterns
Expected Impact: 40% faster responses, lower token usage
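To make the ratio concrete, here is a sketch that tallies tool names from a transcript log. The one-tool-per-line format is an assumed export, not a real /analyze-chat output:

```shell
# Compute the Bash:Read tool ratio from a tool-usage log
# (assumed format: one tool name per line).
tools='Bash
Read
Bash
Bash
Bash
Read'

bash_n=$(printf '%s\n' "$tools" | grep -c '^Bash$')
read_n=$(printf '%s\n' "$tools" | grep -c '^Read$')
echo "Bash:Read ratio = $bash_n:$read_n"
```

With the sample log this reports a 4:2 ratio, i.e. 2:1, which sits right at the expected threshold from Example 3.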
Recommendation
Examples need more concrete input/output pairs; Example 1 shows good structure but needs real chat data and specific git commits as input

Evidence-Based Insights:

  • Quantify patterns with specific counts/percentages
  • Link recommendations to measurable outcomes
  • Show before/after comparisons when possible

Actionable Recommendations:

  • Specify exact rule changes with line numbers
  • Propose concrete workflow modifications
  • Suggest new skills/agents for recurring needs

Balanced Perspective:

  • Celebrate successes alongside improvements
  • Consider team context and constraints
  • Prioritize high-impact, low-effort changes

Incremental Improvement:

  • Focus on 1-2 key changes per cycle
  • Track adoption and effectiveness
  • Iterate based on results

  • ❌ Analysis paralysis: Don't spend more time analyzing than the work took
  • ❌ Vague recommendations: "Improve testing" isn't actionable
  • ❌ Change overload: Too many recommendations at once
  • ❌ Problem-only focus: Balance issues with successes
  • ❌ Ignoring context: Consider team capacity and priorities

Grade: B- · AI Skill Framework

Scorecard: Criteria Breakdown

  • Quick Start: 11/15
  • Workflow: 11/15
  • Examples: 15/20
  • Completeness: 15/20
  • Format: 11/15
  • Conciseness: 11/15