Usage & Commands

A comprehensive guide to CrashLens usage patterns, workflows, and commands.

Quick Start Workflow

Daily Morning Check

Start your day by checking overnight usage patterns and any potential issues.

```bash
# Morning: check overnight usage patterns
crashlens analyze logs/ --since "yesterday 18:00" --until "today 09:00" \
  --policy retry-loop-prevention,cost-spike-detection \
  --alert-threshold medium \
  --output overnight-report.json

# Quick check for urgent issues
crashlens watch --directory .llm_logs/ --alert-on policy-violation
```

CI/CD Integration

Add CrashLens to your GitHub Actions workflow to catch cost violations early.

```yaml
# .github/workflows/crashlens-check.yml
name: CrashLens Cost Check

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  cost-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install CrashLens
        run: pip install crashlens
      - name: Run Cost Analysis
        run: |
          crashlens policy-check logs/ \
            --policy cost-threshold-enforcement \
            --max-daily-cost 100 \
            --fail-on-violation
```

Weekly Team Reporting

Generate comprehensive weekly reports for team review and optimization planning.

```bash
# Weekly team report generation
crashlens report --template executive \
  --period last-7-days \
  --include-charts \
  --team-metrics \
  --include-recommendations \
  --output weekly-team-report.html \
  --format html \
  --email team@company.com

# Extract actionable insights
crashlens optimize --focus all \
  --savings-threshold 50 \
  --implementation-effort low,medium \
  --output-format json | jq '.recommendations'
```

Performance Optimization

Large Log Files

For processing large log files efficiently, use streaming mode and disk caching.

```bash
# For large log files (>100MB), use streaming mode
crashlens analyze --stream --memory-limit 512MB --use-disk-cache logs/
```

Incremental Analysis

Analyze only new logs since the last run for faster processing.

```bash
# Only analyze new logs since the last run
crashlens analyze --resume --checkpoint .crashlens-checkpoint.json
```
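To wire incremental analysis into a scheduled job, a small wrapper can resume from the shared checkpoint and date each report. This is a sketch using only the flags shown above; the function name and report path are our own:

```shell
# Nightly incremental scan: resume from the shared checkpoint and write a
# dated report. The checkpoint file is updated in place by CrashLens.
nightly_scan() {
  crashlens analyze logs/ \
    --resume --checkpoint .crashlens-checkpoint.json \
    --output "report-$(date +%F).json" \
    || echo "crashlens analyze failed" >&2
}
```

Invoke it from cron (e.g. `0 2 * * *`) so the checkpoint keeps each run limited to logs that arrived since the previous one.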

Advanced Use Cases

Real-time Cost Monitoring

Set up real-time monitoring with custom alerting thresholds.

```bash
# Monitor costs and alert on spikes
crashlens watch .llm_logs/ \
  --alert-threshold 50 \
  --time-window 1h \
  --webhook https://hooks.slack.com/webhook \
  --email-alerts admin@company.com

# Identify cost spikes in the last 24 hours
crashlens query --filter "timestamp >= 'today - 24h' AND cost > 10" \
  --group-by model,hour \
  --aggregate sum(cost),avg(cost),count(*) \
  --order-by sum_cost desc \
  --limit 20

crashlens optimize --focus cost --output recommendations.md
```

Retry Loop Detection

Detect and prevent expensive retry loops before they impact your budget.

```bash
# Detect potential retry loops
crashlens policy-check --policy retry-loop-prevention \
  --time-window 300 \
  --alert-threshold 3   # Alert after 3 retries in 5 minutes
```

Model Usage Optimization

Analyze model usage patterns and get recommendations for more efficient alternatives.

```bash
# Compare performance across models
crashlens query --filter "model IN ('gpt-4', 'gpt-3.5-turbo', 'claude-3')" \
  --group-by model \
  --aggregate avg(cost),avg(response_time),count(*) \
  --usage-pattern high-volume \
  --output model-recommendations.json
```

Command Reference

crashlens analyze

Analyze LLM usage logs for patterns, costs, and optimization opportunities.

```bash
crashlens analyze [OPTIONS] [LOG_FILES...]

Options:
  --policy POLICIES    Policies to apply: retry-loop-prevention, cost-spike-detection
  --time-range RANGE   Time range: "last-24h", "2025-01-01..2025-01-31"
  --output FILE        Output file path
  --format FORMAT      Output format: json, csv, html, markdown
  --parallel           Enable parallel processing
  --memory-limit SIZE  Memory limit: 512MB, 1GB, 2GB
  --stream             Stream processing for large files
  --use-disk-cache     Use disk cache for large datasets
  --resume             Resume from checkpoint
  --checkpoint FILE    Checkpoint file path

Examples:
  crashlens analyze logs/
  crashlens analyze --policy retry-loop-prevention --output report.json
  crashlens analyze --stream --parallel logs/*.jsonl
```

crashlens policy-check

Check logs against defined policies and compliance rules.

```bash
crashlens policy-check [OPTIONS] [LOG_FILES...]

Options:
  --policy POLICY_NAME       Policy to check: retry-loop-prevention, cost-threshold
  --config CONFIG_FILE       Custom policy configuration file
  --severity LEVEL           Minimum severity: low, medium, high
  --fail-on-violation        Exit with error code on policy violations
  --detailed-output          Include detailed violation information
  --fix-suggestions          Include automated fix suggestions
  --max-daily-cost AMOUNT    Maximum daily cost threshold
  --max-request-cost AMOUNT  Maximum per-request cost threshold

Examples:
  crashlens policy-check --policy cost-threshold --max-daily-cost 500
  crashlens policy-check --detailed-output --fix-suggestions
```

crashlens init

Initialize CrashLens in your project with default configuration.

```bash
crashlens init [OPTIONS]

Options:
  --project-type TYPE     Project type: web-app, api, ml-pipeline, chatbot
  --log-directory DIR     Directory to monitor for logs
  --config-format FORMAT  Configuration format: yaml, json, toml
  --policies POLICIES     Default policies to enable
  --interactive           Interactive setup (default)
  --non-interactive       Non-interactive setup with defaults
  --template TEMPLATE     Use predefined template: basic, advanced, enterprise
  --github-actions        Generate GitHub Actions workflow
  --docker                Generate Docker configuration

Examples:
  crashlens init --project-type web-app --log-directory logs/
  crashlens init --template enterprise --github-actions
  crashlens init --non-interactive
```

crashlens simulate

Generate simulated LLM usage data for testing and demos.

```bash
crashlens simulate [OPTIONS]

Options:
  --scenario SCENARIO       Simulation scenario: normal, high-cost, retry-loops, mixed-errors
  --duration DURATION       Simulation duration: 1h, 1d, 1w
  --request-rate RATE       Requests per minute: 10, 50, 100
  --models MODELS           Models to simulate: gpt-4, gpt-3.5-turbo, claude-3
  --cost-distribution DIST  Cost distribution: uniform, normal, pareto
  --output FILE             Output file for simulated logs
  --format FORMAT           Output format: jsonl, csv, json
  --count COUNT             Number of requests to simulate
  --realistic               Use realistic timing and patterns

Examples:
  crashlens simulate --scenario high-cost --duration 1d --output test-logs.jsonl
  crashlens simulate --scenario retry-loops --count 1000 --realistic
  crashlens simulate --scenario stress-test --count 10000 --output load-test.jsonl
```

crashlens watch

Monitor directories for new logs and analyze them in real-time.

```bash
crashlens watch [OPTIONS] DIRECTORY

Options:
  --policy POLICIES           Policies to check: retry-loop-prevention, cost-spike
  --alert-threshold AMOUNT    Cost threshold for alerts
  --time-window WINDOW        Time window for analysis: 5m, 1h, 1d
  --webhook URL               Webhook URL for alerts
  --email-alerts EMAIL        Email address for alerts
  --slack-webhook URL         Slack webhook for notifications
  --polling-interval SECONDS  Polling interval in seconds
  --quiet                     Suppress non-alert output
  --daemon                    Run as background daemon

Examples:
  crashlens watch .llm_logs/ --alert-threshold 100 --time-window 1h
  crashlens watch logs/ --webhook https://example.com/webhook --polling-interval 30
  crashlens watch .llm_logs/ --slack-webhook https://hooks.slack.com/... --quiet
```

crashlens query

Query and filter log data with SQL-like expressions.

```bash
crashlens query [OPTIONS] [LOG_FILES...]

Options:
  --filter EXPRESSION    Filter expression (e.g., "cost > 10 AND model = 'gpt-4'")
  --select FIELDS        Select specific fields (comma-separated)
  --group-by FIELDS      Group by fields
  --order-by FIELD       Order by field
  --limit N              Limit results
  --format FORMAT        Output format: json, csv, table
  --aggregate FUNCTIONS  Aggregate functions: sum, avg, count, min, max
  --time-window WINDOW   Group by time window: 1h, 1d, 1w

Filter Operators:
  =, !=, <, >, <=, >=    Comparison operators
  AND, OR, NOT           Logical operators
  IN, NOT IN             List membership
  LIKE, NOT LIKE         Pattern matching
  IS NULL, IS NOT NULL   Null checks

Examples:
  crashlens query --filter "cost > 5" --select model,cost,timestamp
  crashlens query --filter "model IN ('gpt-4', 'claude-3')" --group-by model --aggregate sum(cost)
  crashlens query --filter "timestamp >= '2025-01-20'" --order-by cost --limit 10
  crashlens query --time-window 1h --aggregate avg(cost),count(*) --format csv
```

crashlens report

Generate comprehensive usage reports.

```bash
crashlens report [OPTIONS] [LOG_FILES...]

Options:
  --template TEMPLATE        Report template:
                               executive:   High-level executive summary
                               technical:   Detailed technical analysis
                               cost:        Cost analysis and optimization
                               security:    Security and compliance review
                               performance: Performance analysis
  --period PERIOD            Time period for report
  --output FILE              Output file
  --format FORMAT            Report format: html, pdf, markdown, json
  --include-charts           Include visual charts and graphs
  --team-metrics             Include team-level metrics
  --compare-previous         Compare with previous period
  --include-trends           Include trend analysis
  --include-recommendations  Include optimization recommendations
  --custom-sections SECTIONS Custom report sections

Examples:
  crashlens report --template executive --period last-30-days --output monthly-report.html
  crashlens report --template cost --include-charts --compare-previous
  crashlens report --template technical --team-metrics --include-trends
  crashlens report --format pdf --include-recommendations --output quarterly-review.pdf
```

crashlens optimize

Generate optimization recommendations.

```bash
crashlens optimize [OPTIONS] [LOG_FILES...]

Options:
  --focus AREA                   Optimization focus:
                                   cost:        Cost reduction strategies
                                   performance: Performance improvements
                                   reliability: Reliability enhancements
                                   security:    Security improvements
                                   all:         Comprehensive optimization
  --savings-threshold AMOUNT     Minimum savings threshold to report
  --implementation-effort LEVEL  Effort level: low, medium, high, all
  --format FORMAT                Output format: markdown, json, html
  --include-examples             Include implementation examples
  --priority LEVEL               Priority level: high, medium, low, all
  --confidence LEVEL             Confidence level for recommendations

Examples:
  crashlens optimize --focus cost --savings-threshold 100 --format markdown
  crashlens optimize --focus performance --implementation-effort low
  crashlens optimize --focus all --include-examples --output optimization-plan.html
```

Global Options

```bash
Global options available for all commands:

  --version                Show version information
  --help                   Show help message
  --config FILE            Use custom configuration file
  --verbose, -v            Verbose output
  --quiet, -q              Suppress non-essential output
  --no-color               Disable colored output
  --log-level LEVEL        Set logging level: debug, info, warning, error
  --working-directory DIR  Set working directory
  --simulate               Use simulation data if no logs found
  --dry-run                Show what would be done without executing
  --force                  Force operation, ignore warnings
  --backup                 Create backup before modifying files

Environment Variables:
  CRASHLENS_CONFIG         Default configuration file path
  CRASHLENS_LOG_LEVEL      Default log level
  CRASHLENS_NO_COLOR       Disable colored output
  CRASHLENS_WORKING_DIR    Default working directory

Examples:
  crashlens --version
  crashlens analyze --verbose --config custom-config.yaml
  crashlens policy-check --quiet --no-color --log-level error
  crashlens --simulate analyze --dry-run
```
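In CI, the environment variables above can set project-wide defaults while individual runs pass explicit flags. A minimal sketch using only the variables and flags listed above (the wrapper function is our own, and the assumption that command-line flags take precedence over environment variables is not documented here):

```shell
# Project-wide defaults for every CrashLens invocation in this job.
export CRASHLENS_LOG_LEVEL=warning
export CRASHLENS_NO_COLOR=1

run_quiet_check() {
  # Explicit --log-level would typically override CRASHLENS_LOG_LEVEL
  # for this one run (an assumption; precedence is not documented).
  crashlens policy-check --quiet --log-level error "$@"
}
```

A CI step can then call `run_quiet_check logs/` without repeating the common flags.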

Exit Codes

| Code | Meaning | Description |
|------|---------|-------------|
| 0 | Success | Command completed successfully |
| 1 | General Error | Command failed due to a general error |
| 2 | Policy Violation | Policy violations detected (when using `--fail-on-violation`) |
| 3 | Configuration Error | Invalid configuration or missing required settings |
| 4 | File Error | File not found, permission denied, or I/O error |
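Scripts can branch on these codes explicitly. A minimal sketch (the wrapper name and messages are our own; the codes are from the table above):

```shell
# Map CrashLens exit codes to CI-friendly messages.
check_costs() {
  crashlens policy-check logs/ --policy cost-threshold --fail-on-violation
  case $? in
    0) echo "cost check passed" ;;
    2) echo "policy violation detected" ;;
    3) echo "configuration error" ;;
    4) echo "file error" ;;
    *) echo "crashlens failed" ;;
  esac
}
```

Code 2 is only distinguishable from a generic failure when `--fail-on-violation` is passed, so include it whenever the script needs to treat violations specially.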

Configuration Examples

Basic Configuration (crashlens.yaml)

```yaml
# crashlens.yaml
log_directories:
  - .llm_logs
  - logs

default_policies:
  - retry-loop-prevention
  - model-overkill-detection

severity_levels:
  retry_loops: high
  cost_spikes: medium
  performance_issues: low

output:
  format: json
  include_recommendations: true

alerts:
  email: team@company.com
  slack_webhook: https://hooks.slack.com/...

thresholds:
  daily_cost_limit: 500
  retry_limit: 3
  response_time_limit: 30
```

Advanced Policy Configuration

```yaml
# custom-policies.yaml
policies:
  retry_loop_prevention:
    enabled: true
    severity: high
    conditions:
      - retry_count > 3
      - time_window < 300  # 5 minutes
    action: alert

  cost_threshold_enforcement:
    enabled: true
    severity: medium
    conditions:
      - daily_cost > 1000
      - single_request_cost > 50
    action: block

  model_efficiency_check:
    enabled: true
    severity: low
    conditions:
      - model == "gpt-4" AND token_count < 100
    suggestion: "Consider using gpt-3.5-turbo for short requests"

custom_rules:
  - name: "weekend_usage_alert"
    condition: "timestamp.weekday >= 5 AND cost > 100"
    severity: medium
    message: "High weekend usage detected"
```
Last updated: August 24, 2025