AI Skill Report Card

Generated Skill

B- · 70 · Jan 24, 2026

Advanced Business Sensitivity Analysis

Python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import norm, uniform, beta
import seaborn as sns

# Example: SaaS Business Sensitivity Analysis
base_case = {
    'monthly_revenue': 100000,
    'churn_rate': 0.05,
    'cac': 500,
    'ltv_cac_ratio': 3.0,
    'gross_margin': 0.80
}

# Define low/high multiplier ranges for the tornado diagram
sensitivity_ranges = {
    'monthly_revenue': (0.8, 1.2),
    'churn_rate': (0.8, 1.5),
    'cac': (0.7, 1.3),
    'gross_margin': (0.9, 1.1)
}
Recommendation
Consider adding more specific examples

Workflow:

  • Identify Value Drivers - Select 8-12 key variables impacting outcome
  • Create Tornado Diagram - Show individual variable impact ranges
  • Build Monte Carlo Model - Define probability distributions for each variable
  • Run Simulations - Execute 10,000+ iterations
  • Calculate Risk Metrics - VaR, CVaR, probability of loss
  • Generate Correlation Matrix - Identify variable interdependencies
  • Develop Scenarios - Weight different market conditions
  • Create Hedging Strategies - Recommend risk mitigation approaches

1. Identify Top Value Drivers

Python
def identify_value_drivers(base_model, target_metric):
    """One-at-a-time sensitivity test."""
    drivers = {}
    base_value = calculate_metric(base_model, target_metric)  # defined elsewhere
    for var in base_model:
        # Test a +/- 10% change in each variable, holding the rest fixed
        high_case = base_model.copy()
        high_case[var] *= 1.1
        high_impact = calculate_metric(high_case, target_metric)

        low_case = base_model.copy()
        low_case[var] *= 0.9
        low_impact = calculate_metric(low_case, target_metric)

        # Normalized swing: output range relative to the base value
        drivers[var] = abs(high_impact - low_impact) / base_value

    # Ten most influential variables, largest swing first
    return sorted(drivers.items(), key=lambda x: x[1], reverse=True)[:10]
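The function above leans on a calculate_metric helper the skill never defines. A minimal stand-in for the SaaS base case, using a hypothetical 'annual_gross_profit' metric, might look like:

Python
def calculate_metric(model, target_metric):
    """Illustrative stand-in: only 'annual_gross_profit' is implemented."""
    if target_metric == 'annual_gross_profit':
        # Revenue retained after churn, times margin, annualized
        retained = model['monthly_revenue'] * (1 - model['churn_rate'])
        return 12 * retained * model['gross_margin']
    raise ValueError(f'Unknown metric: {target_metric}')

# Usage (hypothetical metric name):
top_drivers = identify_value_drivers(base_case, 'annual_gross_profit')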

2. Create Tornado Diagram

Python
def create_tornado_diagram(base_case, sensitivity_ranges, target_function):
    results = []
    base_value = target_function(base_case)
    for var, (low_mult, high_mult) in sensitivity_ranges.items():
        # Low case
        low_case = base_case.copy()
        low_case[var] *= low_mult
        low_value = target_function(low_case)

        # High case
        high_case = base_case.copy()
        high_case[var] *= high_mult
        high_value = target_function(high_case)

        results.append({
            'variable': var,
            'low_impact': low_value - base_value,
            'high_impact': high_value - base_value,
            'range': abs(high_value - low_value)
        })

    # Sort by range and plot
    results = sorted(results, key=lambda x: x['range'], reverse=True)
    plot_tornado(results, base_value)
    return results
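create_tornado_diagram also calls a plot_tornado helper that is not defined anywhere in the skill. A minimal matplotlib sketch, assuming the results format built above, could be:

Python
def plot_tornado(results, base_value):
    """Horizontal bars showing each variable's low/high impact around zero."""
    ordered = results[::-1]                      # widest range plotted at the top
    variables = [r['variable'] for r in ordered]
    lows = [r['low_impact'] for r in ordered]
    highs = [r['high_impact'] for r in ordered]

    fig, ax = plt.subplots(figsize=(8, 5))
    y = np.arange(len(variables))
    # Each bar extends from zero in the direction of the impact
    ax.barh(y, lows, color='tab:red', label='Low case')
    ax.barh(y, highs, color='tab:blue', label='High case')
    ax.axvline(0, color='black', linewidth=1)    # base-case reference
    ax.set_yticks(y)
    ax.set_yticklabels(variables)
    ax.set_xlabel('Impact on target metric vs. base case')
    ax.set_title(f'Tornado Diagram (base value = {base_value:,.0f})')
    ax.legend()
    plt.tight_layout()
    plt.show()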

3. Monte Carlo Simulation

Python
def run_monte_carlo(distributions, target_function, n_sims=10000):
    """
    distributions = {
        'revenue': ('normal', mean, std),
        'costs': ('uniform', low, high),
        'growth': ('beta', alpha, beta)
    }
    """
    results = []
    samples = {}

    # Generate samples for each variable
    for var, (dist_type, *params) in distributions.items():
        if dist_type == 'normal':
            samples[var] = np.random.normal(params[0], params[1], n_sims)
        elif dist_type == 'uniform':
            samples[var] = np.random.uniform(params[0], params[1], n_sims)
        elif dist_type == 'beta':
            samples[var] = np.random.beta(params[0], params[1], n_sims)

    # Run simulations: one scenario per draw
    for i in range(n_sims):
        scenario = {var: samples[var][i] for var in samples}
        results.append(target_function(scenario))

    return np.array(results)
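To exercise the simulator on the SaaS example, one can reuse the illustrative calculate_metric stand-in from step 1; the distribution parameters below are assumptions, not values from the skill:

Python
distributions = {
    'monthly_revenue': ('normal', 100000, 10000),
    'churn_rate': ('beta', 2, 38),          # mean 2/(2+38) = 0.05, bounded in [0, 1]
    'gross_margin': ('normal', 0.80, 0.03)
}

def profit(scenario):
    return calculate_metric(scenario, 'annual_gross_profit')

sim = run_monte_carlo(distributions, profit, n_sims=10000)

plt.hist(sim, bins=50)
plt.xlabel('Simulated annual gross profit')
plt.ylabel('Frequency')
plt.show()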

4. Calculate Risk Metrics

Python
def calculate_risk_metrics(simulation_results, confidence_levels=[0.95, 0.99],
                           target_threshold=None):
    metrics = {
        'mean': np.mean(simulation_results),
        'std': np.std(simulation_results),
        'min': np.min(simulation_results),
        'max': np.max(simulation_results)
    }

    # Value at Risk (VaR) and Conditional VaR (mean outcome in the tail)
    for conf in confidence_levels:
        var_level = np.percentile(simulation_results, (1 - conf) * 100)
        cvar_level = np.mean(simulation_results[simulation_results <= var_level])
        metrics[f'VaR_{int(conf*100)}'] = var_level
        metrics[f'CVaR_{int(conf*100)}'] = cvar_level

    # Probability of falling below a target threshold (if one is supplied)
    if target_threshold is not None:
        metrics['prob_loss'] = np.mean(simulation_results < target_threshold)

    return metrics
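Continuing the illustrative simulation above; the 850,000 hurdle is an arbitrary threshold for demonstration only:

Python
metrics = calculate_risk_metrics(sim, target_threshold=850000)
print(f"Mean outcome:    {metrics['mean']:,.0f}")
print(f"VaR(95%):        {metrics['VaR_95']:,.0f}")
print(f"CVaR(95%):       {metrics['CVaR_95']:,.0f}")
print(f"P(below hurdle): {metrics['prob_loss']:.1%}")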

5. Correlation Analysis

Python
def analyze_correlations(samples_dict, simulation_results):
    # Create DataFrame with all input variables and the simulated outcome
    df = pd.DataFrame(samples_dict)
    df['outcome'] = simulation_results

    # Calculate correlation matrix
    corr_matrix = df.corr()

    # Plot heatmap
    plt.figure(figsize=(10, 8))
    sns.heatmap(corr_matrix, annot=True, cmap='RdBu_r', center=0)
    plt.title('Variable Correlation Matrix')
    return corr_matrix
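One wrinkle: run_monte_carlo keeps its samples dict local, so the caller has nothing to pass as samples_dict. A light variant that also returns the draws (an assumption about how to wire these together, not the original API):

Python
def run_monte_carlo_with_samples(distributions, target_function, n_sims=10000):
    """Same sampling logic as run_monte_carlo, but also returns the raw draws."""
    samples = {}
    for var, (dist_type, *params) in distributions.items():
        if dist_type == 'normal':
            samples[var] = np.random.normal(params[0], params[1], n_sims)
        elif dist_type == 'uniform':
            samples[var] = np.random.uniform(params[0], params[1], n_sims)
        elif dist_type == 'beta':
            samples[var] = np.random.beta(params[0], params[1], n_sims)
    results = np.array([target_function({v: samples[v][i] for v in samples})
                        for i in range(n_sims)])
    return results, samples

results, samples = run_monte_carlo_with_samples(distributions, profit)
corr = analyze_correlations(samples, results)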
Recommendation
Include edge cases

Example 1: Real Estate Investment

Input: Property valuation with rent, vacancy, expense, and cap-rate variables
Output:

  • Tornado shows rent has highest impact ($50K range)
  • Monte Carlo: 15% chance of negative returns
  • VaR(95%): -$25,000 annual loss
  • Hedge: Rent guarantee insurance recommended

Example 2: Product Launch

Input: Market size, penetration, price, and costs for a new product
Output:

  • Market size and penetration strongly correlated (0.7)
  • 30% probability of missing break-even
  • Scenario weighting: Bull (20%), Base (60%), Bear (20%) (see the sketch after this list)
  • Hedge: Staged rollout with go/no-go gates
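Scenario weighting reduces to a probability-weighted average over discrete market conditions; a minimal sketch (the per-scenario payoffs are made-up numbers):

Python
scenarios = {
    'bull': {'prob': 0.20, 'outcome': 450000},   # hypothetical payoffs
    'base': {'prob': 0.60, 'outcome': 150000},
    'bear': {'prob': 0.20, 'outcome': -120000},
}

# Weights must form a valid probability distribution
assert abs(sum(s['prob'] for s in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(s['prob'] * s['outcome'] for s in scenarios.values())
print(f"Probability-weighted outcome: {expected_value:,.0f}")  # 156,000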

Best Practices:

  • Variable Selection: Focus on controllable vs. uncontrollable factors
  • Distribution Choice: Use historical data to inform probability distributions (see the fitting sketch after this list)
  • Correlation Modeling: Account for realistic variable relationships
  • Scenario Weighting: Adjust probabilities based on market conditions
  • Validation: Back-test models against historical outcomes
  • Update Frequency: Refresh analysis quarterly or when conditions change
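For the Distribution Choice point, a minimal sketch of validating candidate distributions against historical data with scipy.stats (churn_history is hypothetical data, not from the skill):

Python
from scipy import stats

# Hypothetical monthly churn observations (assumed historical data)
churn_history = np.array([0.048, 0.052, 0.047, 0.055, 0.051, 0.046,
                          0.053, 0.049, 0.050, 0.054, 0.045, 0.052])

# Fit a beta (churn is bounded in [0, 1]) and a normal for comparison
a, b, loc, scale = stats.beta.fit(churn_history, floc=0, fscale=1)
mu, sigma = stats.norm.fit(churn_history)

# Kolmogorov-Smirnov test: a higher p-value means the fit is not rejected
print(stats.kstest(churn_history, 'beta', args=(a, b, 0, 1)))
print(stats.kstest(churn_history, 'norm', args=(mu, sigma)))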

Common Pitfalls:

  • Using normal distributions for everything (consider skewness and bounded variables)
  • Ignoring variable correlations (creates unrealistic scenarios)
  • Over-engineering with too many variables (focus on top drivers)
  • Not validating distribution assumptions with data
  • Presenting results without actionable hedging recommendations
  • Failing to communicate uncertainty ranges to stakeholders
AI Skill Framework Scorecard

Grade: B-

Criteria Breakdown:

  • Quick Start: 11/15
  • Workflow: 11/15
  • Examples: 15/20
  • Completeness: 15/20
  • Format: 11/15
  • Conciseness: 11/15