Quick Start & Installation

Get CrashLens up and running in just 5 minutes

🚀 Quick Start (5 Minutes)

⚠️
Before Starting: CrashLens analyzes your LLM usage logs. Ensure you have *.jsonl log files in a .llm_logs/ or logs/ directory, or use --simulate for testing.

Step 1: Install CrashLens

BASH
# Using pip
pip install crashlens

# Using poetry (recommended)
poetry add crashlens

# Verify installation
crashlens --version
# Should output: crashlens, version 2.5.1

Step 2: Initialize Configuration

BASH
# Interactive setup (recommended for first time)
crashlens init

# Or automated setup
export CRASHLENS_TEMPLATES="retry-loop-prevention,model-overkill-detection"
export CRASHLENS_SEVERITY="high"
crashlens init --non-interactive

Step 3: Test with Sample Data

BASH
# Generate test data
crashlens simulate --output test-logs.jsonl --count 50 --scenario retry-loop

# Run policy check
crashlens policy-check test-logs.jsonl --policy-template retry-loop-prevention


📋 Detailed Installation Guide

Prerequisites

  • Python 3.8+ (3.12 recommended)
  • Git repository with GitHub Actions enabled
  • Basic familiarity with YAML configuration files
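The Python prerequisite above can be checked from a script before installing. This is a minimal sketch (the `python_ok` helper is hypothetical, not part of CrashLens); the version bound comes from the list above:

```python
import sys

# CrashLens requires Python 3.8+ (3.12 recommended)
def python_ok(version_info=sys.version_info):
    """Return True if the interpreter meets the minimum version."""
    return version_info >= (3, 8)

if not python_ok():
    sys.exit("Python 3.8+ is required")
```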

📁 Log File Requirements

CrashLens analyzes LLM usage logs in JSONL format. Here's what you need:

Required Log Structure

Your logs should be in one of these locations:

  • .llm_logs/ directory (recommended - common for LangChain/LangFuse projects)
  • logs/ directory
  • Any directory with *.jsonl files
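The lookup order above can be sketched in a few lines of Python. This is an illustrative helper (`find_log_files` is hypothetical, not part of the CrashLens API), assuming the priority shown in the list:

```python
from pathlib import Path

def find_log_files(root="."):
    """Return *.jsonl files from the conventional log locations,
    checked in the documented priority order."""
    for candidate in (".llm_logs", "logs"):
        files = sorted((Path(root) / candidate).glob("*.jsonl"))
        if files:
            return files
    # Fall back to any *.jsonl files directly under the root
    return sorted(Path(root).glob("*.jsonl"))
```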

Log File Format

Each line should be a JSON object containing LLM API call data:

JSON
{"trace_id": "abc123", "model": "gpt-4", "usage": {"total_tokens": 1500}, "cost": 0.03, "timestamp": "2025-01-15T10:30:00Z"}

Getting Your Logs

From LangFuse:
BASH
# Export traces from LangFuse
mkdir -p .llm_logs
curl -X GET "https://cloud.langfuse.com/api/public/traces" \
  -H "Authorization: Bearer YOUR_LANGFUSE_KEY" > .llm_logs/traces.jsonl
From OpenAI/Custom Logging:
PYTHON
# Example: Save API calls to .llm_logs/
import json
import os
from datetime import datetime

def log_api_call(response, model, cost):
    os.makedirs('.llm_logs', exist_ok=True)
    log_entry = {
        "model": model,
        "usage": response.usage.dict(),
        "cost": cost,
        "timestamp": datetime.utcnow().isoformat()
    }
    with open('.llm_logs/api_calls.jsonl', 'a') as f:
        f.write(json.dumps(log_entry) + '\n')
No Logs Yet?

CrashLens can generate simulation data for testing:

BASH
mkdir -p .llm_logs
crashlens --simulate --source local --count 50 > .llm_logs/demo.jsonl
💡
Important: Without real log files, CrashLens workflows will use simulation data. For meaningful analysis, ensure you have actual LLM usage logs in a .llm_logs/ or logs/ directory.

Installation Methods

Method 1: Poetry (Recommended)

BASH
# Add to your project
poetry add crashlens

# Install in development environment
poetry install

# Verify installation
poetry run crashlens --version

Method 2: pip

BASH
# Install globally
pip install crashlens

# Or in virtual environment
python -m venv crashlens-env
source crashlens-env/bin/activate  # Linux/Mac
# crashlens-env\Scripts\activate   # Windows
pip install crashlens

Method 3: From Source (Advanced)

BASH
# Clone repository
git clone https://github.com/Crashlens/crashlens.git
cd crashlens

# Install with poetry
poetry install

# Or with pip
pip install -e .
Last updated: August 21, 2025