AI Skill Report Card

Cognitive Process Automation

Grade A · 85 · Apr 2, 2026 · Source: Extension-selection
Quick Start — 15 / 15
skill-creator "I need to validate startup ideas by researching market size, analyzing competitors, and evaluating founder-market fit"

This creates a structured skill that runs your exact validation methodology every time, maintaining quality and consistency without manual effort.
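One plausible shape for the generated skill, a sketch only: the `SKILL.md` filename, the `name`/`description` frontmatter fields, and the phase wording are illustrative assumptions, not skill-creator's literal output.

```markdown
<!-- SKILL.md — illustrative sketch of a generated validation skill -->
---
name: startup-idea-validation
description: Validates startup ideas via market sizing, competitor analysis, and founder-market fit evaluation.
---

# Startup Idea Validation

## Phase 1: Market Size
Estimate market size from at least two independent sources; flag when estimates diverge significantly.

## Phase 2: Competitor Analysis
Identify direct competitors, their pricing, and recent funding. Do not present estimates as facts.

## Phase 3: Founder-Market Fit
Assess the founder's domain experience and network against the target market.
```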

Recommendation
The Quick Start example could show actual skill creation syntax or output format rather than just the command
Workflow — 15 / 15

Phase 1: Process Identification

  • Identify a cognitive process you've done 10+ times manually
  • Document the current steps (even if informal)
  • Define what "good output" looks like for this process

Phase 2: Skill Creation

  • Use skill-creator to generate initial structure
  • Specify exact requirements and quality standards
  • Include negative instructions (what NOT to do)
  • Break complex processes into sequential phases

Phase 3: Iteration

  • Run the skill on a real case
  • Document gaps and improvements needed
  • Update instructions based on output quality
  • Repeat 3-4 times until output matches manual quality

Phase 4: Optimization

  • Add constraints and edge case handling
  • Refine phase transitions and data flow
  • Document the skill for team sharing
  • Publish if broadly applicable
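Phases 1 and 2 above can be condensed into a single skill-creator prompt skeleton. The bracketed placeholders are templates to fill with your own process; the wording is a hypothetical example, not a required syntax.

```text
Create a skill for [cognitive process you have done 10+ times manually].

Requirements:
- Follow these steps in order: [step 1], [step 2], [step 3]
- Quality bar: [what "good output" looks like, stated concretely]

Do NOT:
- [negative instruction, e.g. skip steps when data is sparse]
- [negative instruction, e.g. present estimates as facts]

Structure the skill as sequential phases, each feeding its output
into the next.
```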
Recommendation
Examples section could benefit from one concrete input/output pair showing exact prompts and resulting skill structure
Examples — 17 / 20

Example 1: Code Review Skill
Input: "Create a skill for conducting security-focused code reviews for our Python APIs"
Output: Skill with phases for dependency analysis, authentication checks, input validation review, and vulnerability assessment; runs the same standards every time.

Example 2: Customer Research Skill
Input: "Automate our pre-feature customer research process"
Output: Skill that interviews users, analyzes usage data, checks competitor features, and produces structured recommendations before any new development.

Example 3: Technical Writing Skill
Input: "Standardize our API documentation process"
Output: Skill that generates consistent docs with examples, error handling, authentication details, and testing instructions for every endpoint.

Recommendation
Consider adding a brief section on measuring automation success (time saved, quality consistency metrics)

Best Practices

Start with proven processes: Only automate workflows you've mastered manually. The skill encodes YOUR judgment.

Be hyper-specific: Instead of "research competitors," specify "find 5-8 direct competitors, extract pricing tiers, analyze G2 reviews for complaints, flag recent funding rounds."

Use negative constraints: "Do not sugarcoat results," "Do not skip financial analysis," "Do not present estimates as facts."
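Inside a skill's instructions, hyper-specific requirements and negative constraints might combine like this (illustrative wording only, drawn from the examples above):

```text
Competitor research phase:
- Find 5-8 direct competitors
- Extract pricing tiers for each
- Analyze G2 reviews for recurring complaints
- Flag recent funding rounds

Constraints:
- Do not sugarcoat results
- Do not skip financial analysis
- Do not present estimates as facts
```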

Design sequential phases: Break complex processes into steps where each phase produces inputs for the next. This yields better depth than trying to do everything at once.

Plan for evolution: Skills improve through use. Expect 3-4 iterations before solid performance, 10+ iterations before exceeding manual quality.

Common Pitfalls

Automating unfamiliar processes: Don't create skills for workflows you haven't done successfully multiple times manually.

Vague quality standards: "Do good research" produces mediocre output. Specify exactly what thorough looks like.

Monolithic design: Single-phase skills produce shallow analysis. Break into logical sequential steps.

Set-and-forget mentality: Skills need iteration. Plan to improve based on real usage, not perfect first versions.

Hoarding useful skills: If your process solves common problems, publish it. Team skills multiply organizational capability.

Skipping documentation: Undocumented skills become unmaintainable. Include context, constraints, and evolution notes.

Grade A · AI Skill Framework

Scorecard — Criteria Breakdown

  • Quick Start: 15/15
  • Workflow: 15/15
  • Examples: 17/20
  • Completeness: 20/20
  • Format: 15/15
  • Conciseness: 13/15