AI Skill Report Card

Engineering AI Workstations

A-82 · Apr 14, 2026 · Source: Web
15 / 15
Bash
# Essential system check and base setup
sudo apt update && sudo apt install -y curl wget git build-essential htop
python3 --version && docker --version && nvidia-smi
df -h && free -h && lscpu | grep "Model name"
Recommendation
Consider condensing the workflow checklists - some steps could be combined for better flow
14 / 15

System Assessment:

Progress:
- [ ] Check hardware specs (CPU, RAM, GPU, storage)
- [ ] Verify Ubuntu/Debian version and kernel
- [ ] Test GPU drivers and CUDA availability
- [ ] Assess current Python/Docker state

Base Infrastructure Setup:

Progress:
- [ ] Configure package management and repositories
- [ ] Install Docker with proper permissions
- [ ] Set up Python environment management
- [ ] Configure GPU acceleration stack

AI Runtime Deployment:

Progress:
- [ ] Install Ollama or preferred LLM runtime
- [ ] Test local model inference
- [ ] Set up model storage on appropriate drive
- [ ] Configure development environment (VS Code, Git)
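For the "test local model inference" step, one hedged smoke test is to probe Ollama's HTTP API — 11434 is Ollama's default port; the model name and the `check_ollama` helper are just illustrations:

```shell
# Query the local Ollama server if it is up; report cleanly if not.
check_ollama() {
  if curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
    curl -s http://localhost:11434/api/generate \
      -d '{"model":"llama3.1:8b","prompt":"Say hi","stream":false}'
  else
    echo "ollama not reachable on :11434"
  fi
}
check_ollama
```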

Agent Infrastructure:

Progress:
- [ ] Deploy secure workspace containers
- [ ] Set up browser automation tools
- [ ] Configure agent permission boundaries
- [ ] Test end-to-end automation workflows
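A minimal end-to-end sanity check for the stack above is simply confirming each tool is on PATH before handing control to an agent (the tool list is an assumption drawn from the earlier checklists):

```shell
# Smoke test: verify the agent toolchain is installed and reachable.
for tool in docker ollama git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```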
Recommendation
Add more concrete input/output examples for the 'Agent Infrastructure' section, which is currently light on specifics
20 / 20

Example 1: GPU Driver Diagnosis
Input: "nvidia-smi shows 'No devices were found'"
Output:

Bash
# Check GPU detection
lspci | grep -i nvidia
# Reinstall driver (quote the glob so apt, not the shell, expands it)
sudo apt purge 'nvidia-*' -y
sudo apt install nvidia-driver-535 nvidia-utils-535 -y
sudo reboot

Example 2: Docker Permission Fix
Input: "Got permission denied while trying to connect to Docker daemon"
Output:

Bash
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world  # Test immediately

Example 3: Ollama Local Setup
Input: "Want to run Llama models locally"
Output:

Bash
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1:8b
ollama run llama3.1:8b "Test response"
# Move models to larger drive if needed:
# sudo mkdir -p /mnt/storage/ollama && sudo chown $USER /mnt/storage/ollama
# export OLLAMA_MODELS=/mnt/storage/ollama
Recommendation
Include expected execution times or performance benchmarks for key operations like model downloads
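Exact times depend on hardware and bandwidth, but a back-of-envelope helper can set expectations for model downloads (the function name and the size/speed figures are illustrative, not measured):

```shell
# Estimated download time in minutes for a model of a given size.
# size_gb * 8 * 1000 = megabits; divide by link speed, then by 60.
eta_minutes() {
  size_gb=$1; mbps=$2
  echo $(( size_gb * 8 * 1000 / mbps / 60 ))
}
eta_minutes 5 100   # prints 6  (a ~5 GB 8B model on 100 Mbit/s)
eta_minutes 40 100  # prints 53 (a ~40 GB 70B model on 100 Mbit/s)
```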

System Stability:

  • Always use apt over snap for development tools
  • Pin GPU driver versions to prevent breakage
  • Use Docker for isolating agent workspaces
  • Monitor disk space - models consume 4-70GB each
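Given that footprint, a tiny guard before each pull can keep a small boot partition from filling up. A sketch, with illustrative sizes (a 4-bit 8B model is roughly 5 GB; 70B builds can exceed 40 GB):

```shell
# fits_on_disk NEED_GB AVAIL_GB -> "ok" or a warning (illustrative helper)
fits_on_disk() {
  [ "$2" -ge "$1" ] && echo "ok" || echo "need ${1}G, only ${2}G free"
}
avail_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
fits_on_disk 5 "$avail_gb"    # small model
fits_on_disk 70 "$avail_gb"   # large model
```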

Performance Optimization:

  • Place models on fastest available drive (NVMe > SSD > HDD)
  • Configure swap for large model loading
  • Use systemd services for persistent model servers
  • Monitor GPU temperature and throttling
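One hedged way to realize the "systemd services for persistent model servers" bullet is a user-level unit for Ollama; the ExecStart path assumes the default install location of the official script, so verify it locally before enabling:

```shell
# Create a systemd user service so the model server survives logout.
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/ollama.service <<'EOF'
[Unit]
Description=Ollama model server

[Service]
ExecStart=/usr/local/bin/ollama serve
Restart=on-failure

[Install]
WantedBy=default.target
EOF
# Then: systemctl --user daemon-reload && systemctl --user enable --now ollama
```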

Security Boundaries:

  • Run agents in containers with limited filesystem access
  • Use non-root users for all agent processes
  • Restrict network access for coding agents
  • Never give agents sudo or system modification rights
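The filesystem-boundary idea can be sketched as a shell-level allow/deny check (the workspace path and the `path_allowed` name are assumptions for illustration; a real deployment would enforce this with container mounts, not shell logic):

```shell
AGENT_WORKSPACE="$HOME/agent-workspace"

# Allow writes only under the agent's workspace; deny everything else.
path_allowed() {
  case "$1" in
    "$AGENT_WORKSPACE"/*) return 0 ;;
    *) return 1 ;;
  esac
}

path_allowed "$AGENT_WORKSPACE/notes.txt" && echo allow || echo deny  # allow
path_allowed /etc/passwd && echo allow || echo deny                   # deny
```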

Development Environment:

Bash
# VS Code with essential extensions
code --install-extension ms-python.python
code --install-extension ms-toolsai.jupyter
code --install-extension ms-vscode-remote.remote-containers
# Python environment isolation
pipx install poetry
pipx install uv  # Faster pip alternative

Avoid These Mistakes:

  • Using conda when pip/pipx works fine (adds complexity)
  • Installing CUDA toolkit when driver suffices
  • Running agents with full filesystem access
  • Mixing system Python with virtual environments
  • Storing large models on small boot partition
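The venv discipline from the list above, end to end (the project path is arbitrary; the `sys.prefix` check is a quick way to confirm isolation):

```shell
# Per-project environment, keeping system Python untouched.
python3 -m venv .venv
. .venv/bin/activate
python -c 'import sys; print(sys.prefix != sys.base_prefix)'  # prints True inside a venv
deactivate
```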

Quick Diagnosis Commands:

Bash
# When things break, check these first:
which python3 && python3 --version
docker ps && docker images
nvidia-smi && cat /proc/driver/nvidia/version
systemctl --failed
journalctl -xe | tail -20
df -h && free -h

Path and Permission Issues:

Bash
# Fix common PATH problems
echo $PATH
which -a python3
ls -la ~/.bashrc ~/.profile
# Reset Docker permissions
sudo systemctl restart docker
groups $USER | grep docker

GPU Memory Management:

Bash
# Clear GPU memory when stuck
sudo fuser -v /dev/nvidia*
sudo systemctl restart nvidia-persistenced
# For stuck model processes (pkill -f takes extended regex, so plain |):
pkill -f "ollama|vllm|llama"
Grade A- · AI Skill Framework
Scorecard
Criteria Breakdown
Quick Start: 15/15
Workflow: 14/15
Examples: 20/20
Completeness: 20/20
Format: 15/15
Conciseness: 13/15