AI Skill Report Card
Website Accessibility Auditing
Quick Start
```bash
# Install common tools (the axe CLI ships as @axe-core/cli)
npm install -g @axe-core/cli pa11y lighthouse

# Run basic audits
lighthouse --only-categories=accessibility https://example.com
axe https://example.com --tags wcag2a,wcag2aa
pa11y https://example.com
```
Recommendation: Add concrete remediation code snippets for common violations (e.g., proper alt text patterns, ARIA label examples).
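A minimal sketch of what such snippets could look like (the file names, alt text, and button label below are made-up placeholders):

```html
<!-- Decorative image: empty alt so screen readers skip it -->
<img src="divider.png" alt="">

<!-- Informative image: alt conveys the content, not the file name -->
<img src="q3-chart.png" alt="Bar chart: Q3 revenue up 12% over Q2">

<!-- Icon-only button: aria-label provides the accessible name -->
<button aria-label="Close dialog">&times;</button>
```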
Workflow
1. Automated Scan
   - Run a Lighthouse accessibility audit
   - Use axe-core for WCAG compliance checks
   - Check with pa11y for additional issues
2. Manual Screen Reader Testing
   - Navigate with NVDA, JAWS, or VoiceOver
   - Test keyboard-only navigation (Tab, Enter, Space, arrow keys)
   - Verify focus indicators are visible (see the CSS sketch after this list)
3. Content Review
   - Check heading structure (H1-H6 hierarchy)
   - Verify alt text for images
   - Test form labels and error messages
4. Document Findings
   - Categorize by severity (WCAG Level A, AA, AAA violations)
   - Provide specific remediation steps
   - Include screenshots and recordings
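For step 2's focus-indicator check, a minimal sketch of a visible indicator (the color and link target are placeholders; `:focus-visible` limits the outline to keyboard focus):

```html
<style>
  /* High-contrast outline shown only for keyboard focus */
  a:focus-visible,
  button:focus-visible {
    outline: 3px solid #1a73e8; /* placeholder color */
    outline-offset: 2px;
  }
</style>
<a href="/checkout">Proceed to checkout</a>
```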
Progress:
- Automated scan complete
- Screen reader testing done
- Manual navigation verified
- Report generated
Recommendation: Include specific test cases with exact HTML input and expected screen reader output.
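A sketch of one such test case (the field and error text are invented, and exact announcements vary by screen reader and browser):

```html
<!-- Input -->
<label for="email">Email address</label>
<input id="email" type="email" aria-invalid="true" aria-describedby="email-error">
<p id="email-error">Please enter a valid email address.</p>

<!-- Expected output (approximate, NVDA + Firefox):
     "Email address, edit, invalid entry,
      Please enter a valid email address" -->
```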
Examples
Example 1
Input: E-commerce product page
Output:
- Missing alt text on 5 product images
- Form has unlabeled search field
- Color contrast ratio of 3.2:1 (below the 4.5:1 WCAG AA minimum for normal text)
- Heading jumps from H1 to H3
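Two of these findings could be remediated roughly as follows (the selector and class names are illustrative; #767676 is about the lightest gray that still reaches 4.5:1 on white):

```html
<!-- Unlabeled search field: give it a programmatic label
     (.visually-hidden is assumed to be an offscreen-text utility class) -->
<label for="site-search" class="visually-hidden">Search products</label>
<input id="site-search" type="search">

<style>
  /* Lift low-contrast text from 3.2:1 to roughly 4.5:1 on white */
  .product-meta { color: #767676; }
</style>
```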
Example 2
Input: News article page
Output:
- Good heading hierarchy
- All images have descriptive alt text
- Issue: video lacks captions
- Issue: skip link present but not functional
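A skip link only works when its target exists and can take focus; a minimal sketch (id and class names are arbitrary):

```html
<body>
  <!-- First focusable element on the page -->
  <a href="#main-content" class="skip-link">Skip to main content</a>
  <!-- header, navigation, etc. -->
  <main id="main-content" tabindex="-1">
    <!-- tabindex="-1" lets the target receive focus when the link is activated -->
    <h1>Article headline</h1>
  </main>
</body>
```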
Recommendation: Add a severity matrix showing how to prioritize Level A vs. AA vs. AAA violations, weighted by business impact.
Best Practices
- Test with actual screen readers, not just automated tools
- Use keyboard navigation for entire user journey
- Check color contrast in different lighting conditions
- Verify focus management in dynamic content (see the live-region sketch after this list)
- Test with zoom up to 200%
- Include users with disabilities in testing when possible
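For the focus-management point above, one common pattern is an ARIA live region so injected updates are announced without stealing focus (a sketch; the id and message are invented):

```html
<!-- Empty on load; scripts write status messages into it -->
<div id="status" role="status" aria-live="polite"></div>

<!-- After an async action, a script might set:
       document.getElementById('status').textContent = 'Item added to cart';
     Screen readers announce the new text at the next pause. -->
```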
Common Pitfalls
- Relying only on automated tools (they catch roughly 30% of issues)
- Testing with mouse instead of keyboard
- Ignoring focus order and trap management
- Using placeholder text as labels (see the example after this list)
- Assuming ARIA fixes structural problems
- Not testing across different assistive technologies
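The placeholder pitfall, shown side by side (field names are made up):

```html
<!-- Avoid: the placeholder vanishes on input and is not a reliable accessible name -->
<input type="text" placeholder="Full name">

<!-- Prefer: a persistent, programmatically associated label -->
<label for="full-name">Full name</label>
<input id="full-name" type="text">
```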