AI Skill Report Card
Copying Website Layouts
```yaml
---
name: copying-website-layouts
description: Creates exact replicas of websites from URLs for both mobile and desktop layouts. Use when you need to recreate a website's visual design, structure, and responsive behavior precisely.
---
```
Website Layout Copying
Quick Start (15 / 15)
```python
import os
import requests
import css_parser
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def copy_website(url):
    # Fetch the page with a browser-like User-Agent to avoid bot blocking
    response = requests.get(url, headers={
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
    })
    soup = BeautifulSoup(response.content, 'html.parser')

    # Download all assets and rewrite their URLs to local paths
    download_assets(soup, url)

    # Extract and inline critical CSS
    inline_styles(soup, url)

    # Add a responsive viewport if missing; attributes must go through
    # attrs= because new_tag() already takes a positional `name` argument
    if not soup.find('meta', attrs={'name': 'viewport'}):
        viewport = soup.new_tag('meta', attrs={
            'name': 'viewport',
            'content': 'width=device-width, initial-scale=1'})
        soup.head.append(viewport)

    return soup.prettify()
```
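The viewport-injection step can be exercised on its own, without any network access. Below is a dependency-free sketch of the same check using only the standard library; `ensure_viewport` is a hypothetical helper name, and the regex is a simplification of what BeautifulSoup does above:

```python
import re

def ensure_viewport(html):
    """Insert a responsive viewport meta tag before </head> if missing.

    A stdlib-only sketch of the check copy_website() performs with
    BeautifulSoup; it is idempotent, so running it twice is safe.
    """
    if re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.IGNORECASE):
        return html
    tag = '<meta name="viewport" content="width=device-width, initial-scale=1">'
    return re.sub(r'</head>', tag + '</head>', html, count=1, flags=re.IGNORECASE)
```

A regex is fine for a smoke test like this, but the BeautifulSoup version above is the one to rely on for arbitrary real-world HTML.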
Recommendation:
Add concrete input/output examples showing actual HTML/CSS snippets from real websites rather than generic placeholder examples
Workflow (15 / 15)
Progress:
- Fetch source HTML - Get the original page with proper headers
- Download all assets - CSS files, images, fonts, JS (in order of dependency)
- Extract computed styles - Capture inline styles and CSS rules
- Recreate responsive breakpoints - Identify and preserve media queries
- Handle dynamic content - Capture JavaScript-rendered elements
- Test mobile/desktop views - Verify layouts match at different screen sizes
- Optimize asset loading - Inline critical CSS, compress images
Detailed Steps:
- Asset Discovery & Download
```python
def download_assets(soup, base_url):
    assets = {
        'css': soup.find_all('link', rel='stylesheet'),
        'images': soup.find_all('img'),
        'scripts': soup.find_all('script', src=True),
        'fonts': []  # Extracted separately from CSS @font-face rules
    }

    for asset_type, elements in assets.items():
        for element in elements:
            # <link> tags reference via href; <img> and <script> via src
            attr = 'href' if element.has_attr('href') else 'src'
            src = element.get(attr)
            if src:
                full_url = urljoin(base_url, src)
                local_path = download_file(full_url)
                element[attr] = local_path
```
- Style Extraction
```python
def extract_styles(soup, url):
    # Collect absolute URLs of all linked stylesheets
    css_files = [urljoin(url, link.get('href'))
                 for link in soup.find_all('link', rel='stylesheet')]

    # Combine all CSS into one string
    combined_css = ""
    for css_url in css_files:
        response = requests.get(css_url)
        combined_css += response.text + "\n"

    # Parse the combined CSS, preserving media query rules
    parser = css_parser.CSSParser()
    stylesheet = parser.parseString(combined_css)
    return stylesheet
```
- Responsive Layout Capture
```python
def capture_responsive_layouts(url):
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument('--headless')

    layouts = {}
    viewports = [
        {'name': 'mobile', 'width': 375, 'height': 667},
        {'name': 'tablet', 'width': 768, 'height': 1024},
        {'name': 'desktop', 'width': 1920, 'height': 1080}
    ]

    driver = webdriver.Chrome(options=options)
    try:
        for viewport in viewports:
            driver.set_window_size(viewport['width'], viewport['height'])
            driver.get(url)
            layouts[viewport['name']] = {
                'html': driver.page_source,
                # Copy computed styles into a plain object: a raw
                # CSSStyleDeclaration does not survive the trip back
                # across the WebDriver boundary
                'computed_styles': driver.execute_script("""
                    return Array.from(document.querySelectorAll('*')).map(el => {
                        const cs = window.getComputedStyle(el);
                        const styles = {};
                        for (const prop of cs) {
                            styles[prop] = cs.getPropertyValue(prop);
                        }
                        return {
                            selector: el.tagName.toLowerCase()
                                + (el.id ? '#' + el.id : '')
                                + (el.className ? '.' + el.className.split(' ').join('.') : ''),
                            styles: styles
                        };
                    });
                """)
            }
    finally:
        driver.quit()

    return layouts
```
Recommendation:
Reduce verbosity in explanations: Claude understands CSS, responsive design, and web-scraping basics without detailed commentary
Examples (15 / 20)
Example 1: E-commerce Product Page
Input: https://shop.example.com/product/123
Output:
```html
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <style>
    /* Extracted and inlined critical CSS */
    .product-grid {
      display: grid;
      grid-template-columns: 1fr 1fr;
      gap: 2rem;
    }
    @media (max-width: 768px) {
      .product-grid {
        grid-template-columns: 1fr;
        gap: 1rem;
      }
    }
  </style>
</head>
<body>
  <!-- Exact HTML structure with local asset paths -->
</body>
</html>
```
Example 2: Landing Page with Hero Section
Input: https://company.com/landing
Output: Preserves exact spacing, typography, animations, and responsive behavior including:
- Hero section scaling
- Navigation collapse on mobile
- Image aspect ratios
- Font loading and fallbacks
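One way to spot-check that responsive behavior like the above survived the copy is to compare the breakpoints found in the source CSS against the replica's. A minimal stdlib sketch, regex-based rather than a full CSS parser, so nested or exotic media queries are out of scope:

```python
import re

def find_breakpoints(css):
    """Collect the pixel widths used in max-/min-width media queries."""
    widths = re.findall(r'@media[^{]*\b(?:max|min)-width:\s*(\d+)px', css)
    return sorted({int(w) for w in widths})

sample = """
@media (max-width: 768px) { .nav { display: none; } }
@media (min-width: 1200px) { .hero { height: 80vh; } }
"""
# find_breakpoints(sample) → [768, 1200]
```

If the original and the replica yield the same breakpoint list, the media queries were at least carried over; visual verification at each width is still needed.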
Recommendation:
Include a simple template or framework section that provides a ready-to-use class structure for common website patterns
Best Practices
- Use proper User-Agent headers to avoid bot blocking
- Download assets in dependency order: CSS first, then images, then scripts
- Preserve original file structure for relative path references
- Capture computed styles, not just declared CSS (handles inheritance)
- Test with actual devices/browsers, not just responsive dev tools
- Handle lazy-loaded content by scrolling and waiting for images
- Preserve CSS custom properties (CSS variables) for dynamic theming
- Include all font variants (weights, styles) used in the design
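The dependency-order practice above can be made concrete with a small sorting helper. The extension lists and the priority map here are illustrative assumptions, not a spec; real pages also serve assets from extensionless URLs, which this sketch lumps in with scripts:

```python
# CSS first (layout depends on it), then fonts, then images, then scripts.
ASSET_PRIORITY = {'css': 0, 'font': 1, 'image': 2, 'script': 3}

def classify(url):
    """Rough asset classification by file extension (query string ignored)."""
    path = url.lower().split('?')[0]
    if path.endswith('.css'):
        return 'css'
    if path.endswith(('.woff', '.woff2', '.ttf', '.otf')):
        return 'font'
    if path.endswith(('.png', '.jpg', '.jpeg', '.gif', '.svg', '.webp')):
        return 'image'
    return 'script'

def download_order(urls):
    """Return asset URLs sorted so dependencies download first."""
    return sorted(urls, key=lambda u: ASSET_PRIORITY[classify(u)])
```

`sorted` is stable, so assets of the same type keep their document order, which matters for cascading CSS files.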
Common Pitfalls
- Missing viewport meta tag - Will break mobile layout completely
- Broken relative paths - Download and rewrite all asset URLs
- JavaScript-dependent layouts - Use headless browser to capture rendered state
- Missing @font-face rules - Extract from CSS files, not just HTML
- Ignoring :hover/:focus states - Capture all pseudo-class styles
- CORS-blocked assets - Some resources may need proxy or alternative source
- Dynamic content - Capture at multiple scroll positions if content loads on scroll
- Performance issues - Inline only critical CSS, defer non-essential assets
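The missing-@font-face pitfall lends itself to a quick extraction pass over the combined CSS. A stdlib sketch; the regexes assume flat `@font-face` blocks with no nested braces, so a proper parser such as `css_parser` (used in the Workflow section) is the safer choice for production use:

```python
import re

def extract_font_faces(css):
    """Pull @font-face blocks out of a stylesheet."""
    return re.findall(r'@font-face\s*{[^}]*}', css)

def font_urls(css):
    """List the font file URLs referenced by @font-face rules,
    so they can be downloaded alongside the page."""
    urls = []
    for block in extract_font_faces(css):
        urls += re.findall(r'url\(["\']?([^"\')]+)["\']?\)', block)
    return urls
```

Run this over every downloaded CSS file, not just styles found in the HTML, since fonts are almost always declared in external stylesheets.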