Confidence Model
Progressive validation through conversation, PM data, and code analysis
3-Stage Progressive Validation
Stage 1: AI Diagnosis
An AI-guided conversation (12-15 questions) about development practices, velocity trends, and team dynamics. The conversation surfaces recurring patterns and produces a first estimate of hidden costs; a hypothetical sketch of such an estimate follows the lists below.
Data Sources:
- Self-reported metrics
- Pattern recognition
- Qualitative assessments
Outputs:
- Preliminary cost estimates
- Red flag identification
- Recommended next steps
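As an illustration only, a Stage 1 preliminary cost estimate could take a shape like the sketch below. The formula, field names, and every figure in it are hypothetical, not the actual diagnostic model.

```python
# A minimal sketch of a Stage 1 preliminary cost estimate.
# All field names and figures are hypothetical illustrations,
# not the actual diagnostic model.

def preliminary_hidden_cost(team_size: int,
                            hourly_rate: float,
                            lost_hours_per_week: float) -> float:
    """Annualize self-reported time lost to rework, context
    switching, and unplanned work (hypothetical formula)."""
    weeks_per_year = 48  # assumed working weeks per year
    return team_size * hourly_rate * lost_hours_per_week * weeks_per_year

# e.g. 6 engineers, $90/h, ~5 self-reported lost hours a week each
print(f"${preliminary_hidden_cost(6, 90.0, 5.0):,.0f} / year")  # $129,600 / year
```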
Stage 2: PM Analysis
Extracts metrics from GitHub Issues and validates the Stage 1 diagnosis against objective sprint data (velocity, spillover, rework, time-to-ship); a sketch of the extraction follows the lists below.
Data Sources:
- Sprint velocity trends
- Ticket completion rates
- Bug creation/resolution
- Sprint spillover frequency
Validation:
- Triangulates with Stage 1
- Quantifies estimates
- Identifies discrepancies
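As an illustration, the GitHub side of this stage might look like the sketch below, here deriving bug-resolution times from closed issues. The `acme/widgets` repository and the `bug` label are placeholders, and authentication and pagination are omitted.

```python
# Sketch: pulling closed issues from the GitHub REST API to
# derive bug-resolution metrics. "acme/widgets" and the "bug"
# label are placeholders; auth and pagination are elided.
from datetime import datetime

import requests

resp = requests.get(
    "https://api.github.com/repos/acme/widgets/issues",
    params={"state": "closed", "labels": "bug", "per_page": 100},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()

durations = []
for issue in resp.json():
    if "pull_request" in issue:  # the Issues endpoint also returns PRs
        continue
    opened = datetime.fromisoformat(issue["created_at"].rstrip("Z"))
    closed = datetime.fromisoformat(issue["closed_at"].rstrip("Z"))
    durations.append((closed - opened).days)

if durations:
    print(f"{len(durations)} bugs, median resolution "
          f"{sorted(durations)[len(durations) // 2]} days")
```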
Stage 3: Code Analysis
Analyzes the code repository to measure churn, hotspots, and technical debt; a churn sketch follows the lists below. Three-source validation (conversation + PM data + code) yields the highest confidence.
Data Sources:
- Git commit history
- Code churn analysis
- File hotspot detection
- Complexity metrics
Final Output:
- Definitive cost calculations
- Three-source validation
- Detailed recommendations
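One way to compute file-level churn from commit history is sketched below. `git log --numstat` is standard git; the top-10 hotspot cutoff is an arbitrary choice for illustration.

```python
# Sketch: file-level churn from git history, run inside a repo.
# Binary files report "-" for line counts and are skipped.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--numstat", "--format="],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter()
for line in log.splitlines():
    parts = line.split("\t")  # numstat lines: added<TAB>deleted<TAB>path
    if len(parts) == 3 and parts[0].isdigit():
        added, deleted, path = parts
        churn[path] += int(added) + int(deleted)

for path, lines_changed in churn.most_common(10):  # hotspot candidates
    print(f"{lines_changed:>8}  {path}")
```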
Confidence Tiers
- Low confidence - Needs more data points or validation
- Moderate confidence - Preliminary diagnosis, directional guidance (Stage 1 completed)
- High confidence - Actionable findings (Stage 2 completed)
- Very high confidence - Validated with repository data (Stage 3 completed)
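One way to express these tiers in code is sketched below. The numeric cutoffs are assumptions chosen to match the worked example that follows (36% / 63% / 81%), not documented thresholds.

```python
# Sketch: mapping a confidence score to the tiers above.
# Cutoffs are assumed, chosen only to fit the worked example.

def confidence_tier(score: float) -> str:
    if score < 0.30:
        return "Low"
    if score < 0.60:
        return "Moderate"
    if score < 0.80:
        return "High"
    return "Very high"

assert confidence_tier(0.36) == "Moderate"   # Stage 1 example
assert confidence_tier(0.63) == "High"       # Stage 2 example
assert confidence_tier(0.81) == "Very high"  # Stage 3 example
```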
Example Calculation
Confidence is the product of three factors: data-source quality, pattern strength, and data-point coverage.
Stage 1 (AI Diagnosis):
Self-reported data (0.5) × Strong patterns (0.9) × 8 data points (0.8)
= 0.5 × 0.9 × 0.8 = 36% confidence
Stage 2 (+ PM Analysis):
Observed metrics (0.7) × Strong patterns (0.9) × 15 data points (1.0)
= 0.7 × 0.9 × 1.0 = 63% confidence
Stage 3 (+ Code Analysis):
Full repo analysis (0.9) × Strong patterns (0.9) × 25+ data points (1.0)
= 0.9 × 0.9 × 1.0 = 81% confidence
Each stage builds on the previous, progressively increasing confidence through multiple independent data sources.
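A minimal sketch reproducing the arithmetic above; the per-stage factor values come straight from the example, while the factor names are descriptive labels for the three multiplicands, not names from the product.

```python
# Sketch of the multiplicative confidence model in the example.
# Factor values are from the text; names are descriptive labels.

def confidence(source_quality: float,
               pattern_strength: float,
               data_point_coverage: float) -> float:
    return source_quality * pattern_strength * data_point_coverage

stages = {
    "Stage 1 (AI Diagnosis)":    confidence(0.5, 0.9, 0.8),
    "Stage 2 (+ PM Analysis)":   confidence(0.7, 0.9, 1.0),
    "Stage 3 (+ Code Analysis)": confidence(0.9, 0.9, 1.0),
}
for stage, score in stages.items():
    print(f"{stage}: {score:.0%}")  # 36%, 63%, 81%
```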