Guides

Tools Guide

Complete documentation for AIWorkbench.dev tools. Learn how to use the workbench, compare models, and optimize your workflow.

Core Tools

API Workbench

Master the core workbench. Connect to Anthropic, OpenAI, and Gemini with granular control over sampling parameters such as temperature, plus extended thinking.

Read guide
Core Tools

Multi-Model Compare

Learn how to benchmark output quality, latency, and token efficiency across different providers using a single prompt.

Read guide
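The comparison idea above can be sketched as a small timing harness. This is a minimal illustration, not the workbench's actual implementation; the provider callables here are stubs standing in for real SDK clients.

```python
import time

def benchmark(providers, prompt):
    """Run one prompt through several provider callables and time each.

    `providers` maps a label to a function that takes a prompt and
    returns the completion text (stand-ins for real SDK calls).
    """
    results = {}
    for name, call in providers.items():
        start = time.perf_counter()
        output = call(prompt)
        latency = time.perf_counter() - start
        results[name] = {
            "latency_s": round(latency, 3),
            "output_chars": len(output),  # crude proxy for token efficiency
        }
    return results

# Stub providers standing in for real API clients:
stubs = {"provider-a": lambda p: p.upper(), "provider-b": lambda p: p * 2}
print(benchmark(stubs, "Summarize the release notes."))
```

Swapping the stubs for real client calls gives you side-by-side latency and output-size numbers from a single prompt.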
Core Tools

Token Counter

Understanding tokenization engines across different providers and managing your context window effectively.

Read guide
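Because every provider ships its own tokenizer, exact counts require the provider's own tooling. For quick budgeting, a rough heuristic like the one below (around four characters per token for English prose, an assumption, not a billing-accurate figure) is often enough to check whether a prompt fits the context window:

```python
def estimate_tokens(text: str) -> int:
    # Rough rule of thumb: ~4 characters per token for English prose.
    # Exact counts depend on the provider's tokenizer.
    return max(1, len(text) // 4)

def fits_context(prompt: str, context_window: int, reserved_output: int = 1024) -> bool:
    """Check whether a prompt leaves room for the reply in the window."""
    return estimate_tokens(prompt) + reserved_output <= context_window

print(fits_context("word " * 2000, context_window=4096))
```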
Core Tools

Cost Calculator

Learn how to estimate and optimize your monthly API spend across major LLM providers.

Read guide
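The estimation itself is simple arithmetic over per-million-token rates. A minimal sketch, using made-up model names and illustrative prices rather than any provider's real rates:

```python
# Illustrative per-million-token prices in dollars; not real provider rates.
PRICING = {
    "model-a": {"input": 3.00, "output": 15.00},
    "model-b": {"input": 0.25, "output": 1.25},
}

def monthly_spend(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate monthly cost in dollars from projected token volumes."""
    rate = PRICING[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# 50M input + 5M output tokens per month on the cheaper model:
print(round(monthly_spend("model-b", 50_000_000, 5_000_000), 2))
```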
Prompt Engineering

Prompt Library

Versioning, organizing, and optimizing your system instructions for production.

Read guide
Prompt Engineering

Prompt Optimizer

Scientific prompt engineering techniques including Chain-of-Thought, Few-Shot, and Persona Framing.

Read guide
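Of the techniques named above, few-shot prompting is the most mechanical: the prompt is just an instruction, a handful of worked examples, and the new query. A minimal sketch of that assembly (the `Input:`/`Output:` labels are one common convention, not a requirement):

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    parts = [instruction]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")  # model completes this last turn
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("Great latency!", "positive"), ("Constant timeouts.", "negative")],
    "The new cache saved us a fortune.",
)
print(prompt)
```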
Prompt Engineering

Caching Guide

Optimize your token usage with prompt caching. Save up to 90% on API costs for repetitive context loads.

Read guide
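The savings arithmetic can be sketched as follows. This toy model assumes the first call pays full price for the shared prefix and later calls read it at a 90% discount; real providers also charge cache-write surcharges and enforce TTLs, so check your provider's docs before relying on these numbers.

```python
def cost_with_caching(prefix_tokens, variable_tokens, calls,
                      price_per_mtok=3.00, cache_read_discount=0.90):
    """Compare input-token cost with and without prompt caching.

    Simplified model: first call pays full price for everything;
    subsequent calls read the shared prefix at a discount.
    """
    per_tok = price_per_mtok / 1_000_000
    without = calls * (prefix_tokens + variable_tokens) * per_tok
    with_cache = ((prefix_tokens + variable_tokens)          # first call, full price
                  + (calls - 1) * (prefix_tokens * (1 - cache_read_discount)
                                   + variable_tokens)) * per_tok
    return without, with_cache

# A 50k-token system prompt reused across 200 calls, 500 variable tokens each:
no_cache, cached = cost_with_caching(prefix_tokens=50_000, variable_tokens=500, calls=200)
print(f"${no_cache:.2f} without caching vs ${cached:.2f} with caching")
```

With a large shared prefix, the cached total lands close to the advertised 90% reduction.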
Prompt Engineering

Streaming Debugger

Inspect raw SSE chunks, parse metadata, and diagnose latency or truncation issues in real-time LLM streams.

Read guide
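A raw SSE frame is just newline-separated `field: value` lines, so a minimal parser is a few lines of string handling. The sketch below follows the SSE convention of joining multiple `data:` lines with newlines; the `content_block_delta` event name in the example is just a sample payload, not tied to any one provider's schema.

```python
def parse_sse_frame(raw: str):
    """Split one raw Server-Sent Events frame into (event name, data payload)."""
    event, data = "message", []          # "message" is the SSE default event type
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
    return event, "\n".join(data)        # multiple data: lines join with newlines

frame = 'event: content_block_delta\ndata: {"delta": {"text": "Hello"}}'
print(parse_sse_frame(frame))
```

Logging the parsed `(event, data)` pairs per frame is usually enough to spot where a stream stalls or truncates.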
Reference

Provider Status

Real-time health monitoring for every AI provider. Check latency, uptime, and error rates before committing tokens.

Read guide
Reference

Model Catalog

Complete specifications for every model: context windows, pricing, features, and availability across providers.

Read guide
Security

BYOK Security

Understand the security architecture behind our Bring-Your-Own-Key model. Your keys never leave your browser.

Read guide
Security

Privacy Policy

A detailed breakdown of how we handle your data and why our browser-only model is the gold standard for privacy.

Read guide