Tools Guide
Complete documentation for AIWorkbench.dev tools. Learn how to use the workbench, compare models, and optimize your workflow.
API Workbench
Master the core workbench. Connect Anthropic, OpenAI, and Gemini with granular control over sampling parameters such as temperature and top-p, plus extended thinking.
Multi-Model Compare
Learn how to benchmark output quality, latency, and token efficiency across different providers using a single prompt.
Token Counter
Understanding tokenization engines across different providers and managing your context window effectively.
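Before reaching for a provider-specific tokenizer, a rough rule of thumb is that English text averages about four characters per token. The sketch below is that heuristic only, not a real BPE tokenizer; actual counts vary by provider and content.

```python
# Rough token estimate: many English texts average ~4 characters per token.
# Heuristic only -- real tokenizers (BPE variants) differ per provider.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # → 11
```

Use a heuristic like this for quick budgeting; use the Token Counter for exact, per-provider counts.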
Cost Calculator
Learn how to estimate and optimize your monthly API spend across major LLM providers.
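The core arithmetic behind a spend estimate is simple: tokens divided by one million, times the per-million-token rate, summed for input and output. The prices below are placeholders for illustration; check each provider's current pricing page.

```python
# Estimate monthly API spend from token volume and per-million-token rates.
# The example rates ($3/M input, $15/M output) are illustrative, not current pricing.
def monthly_cost(input_tokens: float, output_tokens: float,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# 50M input tokens and 10M output tokens in a month:
print(f"${monthly_cost(50e6, 10e6, 3.00, 15.00):.2f}")  # → $300.00
```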
Prompt Library
Versioning, organizing, and optimizing your system instructions for production.
Prompt Optimizer
Scientific prompt engineering techniques including Chain-of-Thought, Few-Shot, and Persona Framing.
Caching Guide
Optimize your token usage with prompt caching. Save up to 90% on API costs for repetitive context loads.
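The savings math works like this sketch, which assumes cache-hit input tokens are billed at 10% of the normal input rate (a 90% discount; actual discount rates and cache-write surcharges vary by provider).

```python
# Sketch of prompt-cache savings, assuming cached input tokens cost
# (1 - discount) of the normal rate. Discount and rates are illustrative.
def cached_cost(tokens: float, price_per_m: float,
                cached_fraction: float, discount: float = 0.90) -> float:
    per_token = price_per_m / 1e6
    cached = tokens * cached_fraction
    fresh = tokens - cached
    return fresh * per_token + cached * per_token * (1 - discount)

# 1M input tokens at $3/M with 80% of the prompt served from cache:
print(f"${cached_cost(1e6, 3.00, 0.80):.2f}")  # → $0.84 (vs $3.00 uncached)
```

The larger the stable prefix (system prompt, long documents) relative to the whole request, the closer you get to the maximum discount.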
Streaming Debugger
Inspect raw SSE chunks, parse metadata, and diagnose latency or truncation issues in real-time LLM streams.
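A raw SSE stream is a sequence of text lines, most prefixed with `data: ` and carrying a JSON payload; many provider APIs end the stream with a `data: [DONE]` sentinel (an OpenAI-style convention). A minimal line parser, with illustrative field names:

```python
import json

# Minimal SSE line parser sketch. "data: " framing is part of the SSE spec;
# the [DONE] sentinel and the "choices/delta" shape are provider conventions.
def parse_sse_line(line: str):
    if not line.startswith("data: "):
        return None  # skip comments, blank keep-alive lines, other fields
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None  # end-of-stream sentinel
    return json.loads(payload)

chunk = parse_sse_line('data: {"choices": [{"delta": {"content": "Hi"}}]}')
print(chunk["choices"][0]["delta"]["content"])  # → Hi
```

Inspecting the parsed chunks one by one is how the debugger surfaces truncation (a stream that stops without its sentinel) and per-chunk latency gaps.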
Provider Status
Real-time health monitoring for every AI provider. Check latency, uptime, and error rates before committing tokens.
Model Catalog
Complete specifications for every model: context windows, pricing, features, and availability across providers.
BYOK Security
Understand the security architecture behind our Bring-Your-Own-Key model. Your keys never leave your browser.
Privacy Policy
A detailed breakdown of how we handle your data and why our browser-only model is the gold standard for privacy.