Introduction
Understanding CrashLens and its capabilities
Welcome to CrashLens! 🚀
CrashLens is an open-source, production-grade CLI platform for LLM cost monitoring and policy enforcement, designed to help organizations monitor, control, and optimize their AI/LLM usage across all major platforms. Our mission is to move LLM log quality from "hope for the best" to "guaranteed compliance", so you can manage your AI investments with confidence.
The rapid adoption of Generative AI and Large Language Models (LLMs) has introduced new financial complexities, where costs are primarily driven by token volume rather than traditional compute metrics. This can lead to unexpected "bill shock". CrashLens helps you proactively manage these costs.
🎯 What CrashLens Does
Detects Token Waste
Automatically identifies wasteful usage patterns such as retry loops and model overkill (using an expensive model for a simple task).
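To make retry-loop detection concrete, here is a minimal sketch of how it can work over JSONL logs. The field names `trace_id` and `prompt` and the repeat threshold are illustrative assumptions, not CrashLens's actual schema:

```python
import json
from collections import Counter

def find_retry_loops(jsonl_lines, threshold=3):
    """Flag (trace, prompt) pairs repeated `threshold`+ times --
    a common sign of a retry loop burning tokens.

    Field names are assumed for illustration only."""
    seen = Counter()
    for line in jsonl_lines:
        record = json.loads(line)
        # Identical prompts within one trace usually mean the
        # caller is retrying the same request.
        seen[(record["trace_id"], record["prompt"])] += 1
    return [key for key, count in seen.items() if count >= threshold]

logs = [
    '{"trace_id": "t1", "prompt": "summarize report"}',
    '{"trace_id": "t1", "prompt": "summarize report"}',
    '{"trace_id": "t1", "prompt": "summarize report"}',
    '{"trace_id": "t2", "prompt": "translate memo"}',
]
print(find_retry_loops(logs))  # [('t1', 'summarize report')]
```

The same counting idea extends to other waste patterns, e.g. flagging calls to a large model whose prompts stay under some token budget.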
Policy Enforcement
Apply YAML-based rules that set limits on cost, token usage, model selection, and more, enforcing your team's policies directly.
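As a rough sketch of what rule-based enforcement looks like, the snippet below checks one log record against a policy dict (shaped as a parsed YAML rules file might be). The rule keys `max_cost_usd`, `max_total_tokens`, and `allowed_models` are hypothetical, not CrashLens's real policy schema:

```python
# Illustrative policy, as it might look after parsing a YAML rules file.
# All keys here are assumptions for the sketch.
policy = {
    "max_cost_usd": 0.50,        # reject any single call costing more
    "max_total_tokens": 4000,    # reject oversized requests/responses
    "allowed_models": {"gpt-4o-mini", "gpt-3.5-turbo"},
}

def check_record(record, policy):
    """Return a list of policy violations for one log record."""
    violations = []
    if record["cost_usd"] > policy["max_cost_usd"]:
        violations.append(f"cost {record['cost_usd']} exceeds limit")
    if record["total_tokens"] > policy["max_total_tokens"]:
        violations.append(f"{record['total_tokens']} tokens exceeds limit")
    if record["model"] not in policy["allowed_models"]:
        violations.append(f"model {record['model']} not allowed")
    return violations

record = {"model": "gpt-4", "cost_usd": 0.82, "total_tokens": 1200}
print(check_record(record, policy))
# ['cost 0.82 exceeds limit', 'model gpt-4 not allowed']
```

Expressing these limits as data rather than code is what lets a tool version, review, and simulate policies independently of the application.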
CI/CD Integration
Block bad logs and costly AI usage patterns from reaching production by integrating automated checks into your development pipelines.
Data Quality Assurance
Validate your LLM logs against defined schema contracts to ensure consistency, reliability, and proper cost tracking fields.
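At its simplest, a schema contract check verifies required fields line by line, which also yields the line-numbered errors described below. This is a sketch only; the required-field list is an assumption, not CrashLens's contract format:

```python
import json

# Assumed required fields for illustration; a real contract would
# define these per log source.
REQUIRED_FIELDS = ("model", "prompt_tokens", "completion_tokens", "cost_usd")

def validate_log(jsonl_lines):
    """Yield (line_number, missing_field) pairs so errors can be
    reported with a line number and a field name."""
    for lineno, line in enumerate(jsonl_lines, start=1):
        record = json.loads(line)
        for field in REQUIRED_FIELDS:
            if field not in record:
                yield (lineno, field)

logs = [
    '{"model": "gpt-4o-mini", "prompt_tokens": 12, '
    '"completion_tokens": 40, "cost_usd": 0.001}',
    '{"model": "gpt-4", "prompt_tokens": 900}',
]
for lineno, field in validate_log(logs):
    print(f"line {lineno}: missing field {field!r}")
# line 2: missing field 'completion_tokens'
# line 2: missing field 'cost_usd'
```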
Actionable Insights
Get clear error messages with line numbers and field names, along with concrete suggestions for optimizing your LLM usage.
Safe Testing
Use our simulation mode (--simulate) to test new policies and see their potential impact without making changes or incurring costs.
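The idea behind a simulation mode can be sketched as a dry-run switch: evaluate the policy and report what would be blocked, without blocking anything. This is illustrative only; CrashLens's actual `--simulate` behavior and interface may differ:

```python
def enforce(records, violates_policy, simulate=False):
    """Apply a policy check to records. In simulate (dry-run) mode,
    report what *would* be blocked and block nothing.

    `violates_policy` is any predicate: record -> bool."""
    blocked = [r for r in records if violates_policy(r)]
    if simulate:
        print(f"[simulate] would block {len(blocked)} of {len(records)} records")
        return []  # nothing actually blocked in a dry run
    return blocked

too_expensive = lambda r: r["cost_usd"] > 0.5  # assumed field name
records = [{"cost_usd": 0.2}, {"cost_usd": 0.9}]
enforce(records, too_expensive, simulate=True)
# [simulate] would block 1 of 2 records
```

Running the same check with `simulate=False` returns the offending records for real enforcement, so the policy logic is identical in both modes.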
🌟 Key Features
Multi-Source Support
Ingests logs from Langfuse, Helicone, the OpenAI API, and local JSONL files
Local & Secure
Operates entirely locally, ensuring your data remains private and secure
Production Ready
Speeds up debugging, keeps cost tracking dependable, and helps prevent costly LLM incidents in production