Run Your First Analysis

Test CrashLens with sample data and validate your setup

🧪 Testing and Data Generation

Generate Test Data for Development

CrashLens includes a built-in simulation feature for generating test data:

Basic Test Data Generation

BASH
# Generate normal usage patterns
crashlens simulate --output normal-logs.jsonl --count 100 --scenario normal

# Generate retry loop patterns (for testing retry detection)
crashlens simulate --output retry-logs.jsonl --count 50 --scenario retry-loop

# Generate model overkill patterns (expensive models for simple tasks)
crashlens simulate --output overkill-logs.jsonl --count 30 --scenario model-overkill

Advanced Simulation Options

BASH
# Custom models and error rates
crashlens simulate \
  --output custom-logs.jsonl \
  --count 200 \
  --scenario mixed-errors \
  --models "gpt-4o,gpt-4-turbo,claude-3" \
  --error-rate 0.3 \
  --seed 42

# Deterministic test data (same every time)
crashlens simulate \
  --output deterministic-test.jsonl \
  --count 100 \
  --seed 12345 \
  --force

Scenario Types Available

  • normal: Balanced mix of successful and error traces
  • 🔄 retry-loop: Multiple attempts with the same prompts (tests retry detection)
  • 💰 model-overkill: Expensive models for simple tasks (tests overkill detection)
  • ⏱️ slow: Long response times (>5000 ms; tests timeout detection)
  • 🎭 mixed-errors: Various error types and patterns
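The scenarios above can be generated in one pass with a small loop. This is a dry-run sketch that only prints the commands, using the `simulate` flags shown in this guide; pipe the output to `sh` (or drop the `echo`) to actually run them:

```shell
# Dry-run sketch: print one `crashlens simulate` command per scenario.
# Remove the `echo` (or pipe the file to `sh`) to execute for real.
for scenario in normal retry-loop model-overkill slow mixed-errors; do
  echo crashlens simulate --output "${scenario}-logs.jsonl" --count 50 --scenario "$scenario"
done > scenario-commands.txt
cat scenario-commands.txt
```

Writing the commands to a file first makes it easy to review exactly what will run before executing anything.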

Test Your Policies Locally

Before pushing to GitHub Actions, test locally:

Step 1: Generate Test Data

BASH
# Generate test data
crashlens simulate --output test-data.jsonl --count 100 --scenario retry-loop
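Before running policy checks, it can help to sanity-check the generated file: each trace should be one JSON object per line (JSONL). A minimal sketch; the two stand-in records below only illustrate the expected shape, so point the checks at your generated test-data.jsonl instead:

```shell
# Stand-in JSONL records for illustration; with CrashLens installed,
# `crashlens simulate` writes this file for you.
printf '%s\n' '{"traceId": "t1"}' '{"traceId": "t2"}' > sample.jsonl

# Every non-empty line in a JSONL file should start with '{'.
total=$(grep -c . sample.jsonl)
objects=$(grep -c '^{' sample.jsonl)
if [ "$total" -eq "$objects" ]; then
  echo "ok: $total records look like JSON objects"
fi
```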

Step 2: Test Policy Detection

BASH
# Test policy detection
crashlens policy-check test-data.jsonl --policy-template retry-loop-prevention --fail-on-violations
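In CI you typically gate on the exit code, assuming `--fail-on-violations` makes the command exit non-zero when violations are found. The sketch below stubs the CLI with a shell function so the control flow runs anywhere; delete the stub once `crashlens` is installed:

```shell
# Stub standing in for the real CLI so this sketch runs anywhere;
# it pretends violations were found. Remove once crashlens is installed.
crashlens() { return 1; }

if crashlens policy-check test-data.jsonl --policy-template retry-loop-prevention --fail-on-violations; then
  gate="pass"
else
  gate="fail"
fi
echo "policy gate: $gate"
```

This is the same pattern GitHub Actions uses: any step whose command exits non-zero fails the job.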

Step 3: Test with Different Severity Levels

BASH
# Test with different severity levels
crashlens policy-check test-data.jsonl --policy-template all --severity-threshold medium

Step 4: Generate Markdown Report

BASH
# Generate markdown report
crashlens policy-check test-data.jsonl --policy-template all --format markdown > policy-report.md

Complete Testing Workflow

Here's a complete workflow to test all CrashLens features:

BASH
#!/bin/bash
# Complete CrashLens Testing Script

echo "🧪 Starting CrashLens Testing Workflow..."

# 1. Generate different types of test data
echo "📝 Generating test data..."
crashlens simulate --output normal-test.jsonl --count 50 --scenario normal
crashlens simulate --output retry-test.jsonl --count 30 --scenario retry-loop
crashlens simulate --output overkill-test.jsonl --count 20 --scenario model-overkill

# 2. Test individual policy templates
echo "🔍 Testing individual policies..."
crashlens policy-check retry-test.jsonl --policy-template retry-loop-prevention
crashlens policy-check overkill-test.jsonl --policy-template model-overkill-detection

# 3. Test all policies together
echo "🎯 Testing all policies..."
crashlens policy-check normal-test.jsonl --policy-template all --severity-threshold high

# 4. Generate reports
echo "📊 Generating reports..."
mkdir -p reports
crashlens policy-check retry-test.jsonl --policy-template all --format markdown > reports/retry-analysis.md
crashlens policy-check overkill-test.jsonl --policy-template all --format markdown > reports/overkill-analysis.md

# 5. Test with different configurations
echo "⚙️ Testing configurations..."
crashlens policy-check normal-test.jsonl --policy-template all --severity-threshold medium --fail-on-violations

echo "✅ Testing complete! Check the reports/ directory for detailed analysis."
echo "🚀 Ready to integrate with GitHub Actions!"

Tips for Effective Testing

💡 Start Small

Begin with 10-50 test traces to understand the output format and policy behavior.

🎯 Test Specific Scenarios

Use targeted scenarios to validate that specific policy templates work as expected.

🔄 Use Deterministic Seeds

Pass the --seed parameter for reproducible test results in CI/CD environments.

📈 Validate Reports

Always check generated reports to ensure policies are detecting the expected patterns.
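The deterministic-seeds tip can be turned into an automated check: run the simulation twice with the same seed and compare the files byte for byte. A sketch with a hypothetical stand-in generator; swap `fake_simulate` for a real `crashlens simulate --seed 12345 --force` invocation:

```shell
# fake_simulate is a hypothetical stand-in for a seeded `crashlens simulate`
# run: a fixed seed should yield byte-identical output on every run.
fake_simulate() {
  seed=$1
  out=$2
  printf '{"traceId": "%s-1"}\n' "$seed" > "$out"
  printf '{"traceId": "%s-2"}\n' "$seed" >> "$out"
}

fake_simulate 12345 run-a.jsonl
fake_simulate 12345 run-b.jsonl

if cmp -s run-a.jsonl run-b.jsonl; then
  echo "deterministic: outputs are identical"
fi
```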

🚀 Next Steps

Once you've successfully tested CrashLens locally, you're ready to:

  • Integrate with your GitHub Actions workflow
  • Configure policies for your specific use case
  • Set up automated monitoring and alerting
  • Start analyzing your real LLM usage logs
Last updated: August 21, 2025