# Generating Claude-Level Skills
Creates sophisticated AI skill files with deep reasoning workflows, tool integration, and Claude-level thinking patterns. Use when building advanced agent capabilities or converting expertise into structured skill formats.
## Quick Start
```yaml
---
name: analyzing-market-trends
description: Analyzes market data using multi-step reasoning, tool orchestration, and uncertainty quantification. Use when evaluating investment opportunities or market conditions.
---

# Intent Analysis
- Problem: Complex market analysis requires systematic data gathering, pattern recognition, and risk assessment
- Users: Investment analysts, strategists, researchers
- Output: Structured market assessment with confidence levels and actionable insights

# Reasoning Workflow
Step 1: Parse request → identify market, timeframe, specific metrics
Step 2: Gather data → search recent reports, financial data, news
Step 3: Pattern analysis → identify trends, correlations, anomalies
Step 4: Risk assessment → evaluate uncertainties, alternative scenarios
Step 5: Synthesis → combine insights with confidence weighting
Step 6: Validation → cross-check conclusions against multiple sources
Step 7: Output → structured report with recommendations

# Tool Integration
- **search**: Market data, news, reports (when: data gathering phase)
- **generate_text**: Analysis summaries (when: synthesis phase)
- **code**: Statistical analysis (when: pattern detection needed)

Progress Tracking:
- [ ] Data collection complete
- [ ] Pattern analysis done
- [ ] Risk factors identified
- [ ] Cross-validation passed
```
## Workflow
**Step 1: Intent analysis.** Before any skill creation, analyze:
- What real-world problem does this solve?
- Who are the end users, and what is their expertise level?
- What does success look like?
- What tools and data are required?

Output: a clear problem statement and user requirements.
**Step 2: Reasoning design.** Create the thinking pipeline:

```
Input Processing → Context Building → Multi-step Analysis →
Tool Orchestration → Uncertainty Handling → Output Synthesis
```

For each step, define:
- Trigger conditions
- Decision criteria
- Fallback strategies
- Quality checks
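These per-step definitions can live inline in the skill file, attached to the step they govern. A minimal sketch, using the market-analysis example from Quick Start (the specific thresholds and criteria here are illustrative, not from a real skill):

```markdown
Step 3: Pattern analysis → identify trends, correlations, anomalies
  - Trigger: data collection complete, with at least two independent sources gathered
  - Decision criteria: treat a trend as real only if it appears in 2+ independent sources
  - Fallback: if sources conflict, flag the conflict and widen the search window
  - Quality check: every claimed pattern cites the data points that support it
```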
**Step 3: Tool integration.** Identify required tools:
- **search**: when external data is needed
- **generate_text**: when content creation is required
- **generate_image**: when visual explanations help
- **code**: when computation or analysis is needed

Then define the orchestration:
- Sequence of tool usage
- Error handling between tools
- Data flow between steps
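One way to capture sequence, error handling, and data flow together is a short orchestration list in the skill body. A sketch, assuming the same three tools as the Quick Start example (the ordering and fallbacks are illustrative):

```markdown
# Tool Orchestration
1. search → collect raw inputs (on failure: ask the user to paste relevant data)
2. code → compute statistics on search results (on failure: fall back to
   qualitative analysis and note the limitation)
3. generate_text → synthesize findings from the outputs of steps 1–2

Data flow: search results feed code; code output plus search citations feed generate_text.
```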
**Step 4: Reasoning patterns.** Embed:
- Explicit uncertainty acknowledgment
- Multiple-hypothesis consideration
- Step-by-step validation
- Self-correction mechanisms
- Context retention across steps
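These patterns only work if they are written as explicit instructions in the skill body, not left implicit. A sketch of how they might read (the phrasing is illustrative):

```markdown
# Reasoning Patterns
- State confidence explicitly: "High confidence (multiple sources agree)"
  vs. "Low confidence (single source, unverified)"
- Hold at least two hypotheses until evidence eliminates one
- After each step, re-check: does this conclusion still hold given everything
  gathered so far?
- If a later step contradicts an earlier one, revise the earlier conclusion
  and say so in the output
```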
**Step 5: Edge cases.** Design robust handling for:
- Incomplete inputs → clarification requests
- Tool failures → alternative approaches
- Ambiguous data → uncertainty quantification
- Scope creep → boundary setting
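In the skill file, this becomes a dedicated edge-case section. A sketch (the specific conditions and responses are illustrative):

```markdown
# Edge Cases
- Missing timeframe or scope → ask one focused clarifying question before proceeding
- search returns nothing relevant → state the gap and analyze only what the user provided
- Conflicting data → present both readings with confidence levels rather than
  silently picking one
- Request outside the skill's domain → say so and suggest a better-suited approach
```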
## Examples
### Example 1: Expert Knowledge Conversion

**Input:** "I'm a supply chain expert with 15 years optimizing logistics"

**Output:**
```yaml
---
name: optimizing-supply-chains
description: Applies systematic supply chain analysis using constraint theory, risk modeling, and cost optimization. Use when evaluating logistics efficiency or designing distribution networks.
---

# Intent Analysis
- Problem: Multi-variable supply chain optimization with real-world constraints
- Users: Operations managers, logistics coordinators, procurement teams
- Output: Actionable optimization plan with implementation roadmap

# Reasoning Workflow
Step 1: Map current state → inventory, routes, bottlenecks, costs
Step 2: Identify constraints → capacity limits, regulatory requirements, SLAs
Step 3: Model scenarios → demand variations, supply disruptions, cost changes
Step 4: Optimize configuration → route planning, inventory positioning, supplier mix
Step 5: Risk assessment → single points of failure, contingency planning
Step 6: Implementation design → phased rollout, success metrics, monitoring
Step 7: Validation → feasibility check, stakeholder alignment

# Tool Integration
- **search**: Industry benchmarks, regulatory updates, supplier data
- **code**: Optimization algorithms, scenario modeling, cost analysis
- **generate_text**: Implementation plans, stakeholder communications
```
### Example 2: Technical Process Conversion

**Input:** "I debug complex distributed systems by analyzing logs and traces"

**Output:**
```yaml
---
name: debugging-distributed-systems
description: Systematically diagnoses distributed system issues through log correlation, trace analysis, and hypothesis testing. Use when troubleshooting production incidents or performance problems.
---

# Intent Analysis
- Problem: Multi-service failures require systematic investigation and root cause identification
- Users: SREs, backend engineers, DevOps teams
- Output: Root cause analysis with remediation steps and prevention measures

# Reasoning Workflow
Step 1: Incident assessment → scope, impact, timeline, initial symptoms
Step 2: Data gathering → logs, metrics, traces across affected services
Step 3: Pattern recognition → error clustering, timing correlations, dependency mapping
Step 4: Hypothesis formation → potential root causes ranked by likelihood
Step 5: Systematic testing → validate/eliminate hypotheses with evidence
Step 6: Root cause confirmation → definitive cause with supporting evidence
Step 7: Solution design → immediate fixes, long-term prevention, monitoring

# Tool Integration
- **code**: Log parsing, metric analysis, trace correlation
- **search**: Error documentation, similar incidents, service dependencies
- **generate_text**: Incident reports, runbook updates, team communications
```
## Best Practices
**Reasoning Design:**
- Always include uncertainty quantification
- Build in self-correction loops
- Design for incomplete information scenarios
- Include confidence levels in outputs
**Tool Orchestration:**
- Specify tool selection criteria explicitly
- Design graceful degradation paths
- Include tool validation steps
- Plan for tool unavailability
**Workflow Structure:**
- Start with broad analysis, narrow to specifics
- Include validation at each major step
- Design checkpoints for complex processes
- Enable restart from any checkpoint
**Output Quality:**
- Include confidence assessments
- Provide alternative approaches when uncertain
- Structure findings hierarchically
- Include actionable next steps
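An output template in the skill file is one way to enforce these qualities on every run. A sketch (the section names and confidence scale are illustrative):

```markdown
# Output Format
## Summary (2–3 sentences, leading with the main finding)
## Findings (each tagged with a confidence level: high / medium / low)
## Alternatives (when confidence is below high, list at least one other interpretation)
## Next Steps (concrete, ordered, with owners where known)
```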
## Common Pitfalls
**Shallow Reasoning:**
- Don't create linear instruction lists
- Avoid single-path thinking
- Don't skip uncertainty handling
- Don't ignore context dependencies
**Poor Tool Integration:**
- Don't bolt on tools as afterthoughts
- Avoid tool selection without clear criteria
- Don't ignore tool failure scenarios
- Don't create tool dependency chains without fallbacks
**Weak Edge Cases:**
- Don't assume perfect inputs
- Avoid brittle error handling
- Don't ignore scope boundary issues
- Don't skip validation steps
**Generic Outputs:**
- Don't create one-size-fits-all responses
- Avoid abstract recommendations
- Don't skip implementation details
- Don't ignore user context and constraints