Provider Status
Real-time health monitoring for every AI provider.
Before you commit tokens to a model, you need to know if the API is healthy. AIWorkbench.dev provides a live status dashboard tracking uptime and response health across all supported providers.
Monitored Providers
| Provider | Endpoint | Status Source |
|---|---|---|
| Anthropic Claude | api.anthropic.com | Official status page + ping |
| OpenAI GPT-4o | api.openai.com | Official status page + ping |
| Google Gemini | generativelanguage.googleapis.com | Google Cloud status |
| AWS Bedrock | bedrock-runtime.*.amazonaws.com | AWS service health |
| DeepSeek | api.deepseek.com | Direct API ping |
| Meta Llama | Varies by API provider | Gateway health |
What "Status" Actually Means
A green light does not guarantee your request succeeds. Status tracking measures three layers:
- HTTP Availability: Can we reach the endpoint? A 200 OK from the root path indicates the server is online.
- API Latency: Median TTFT (time to first token) over the last 5 minutes. Spikes above 3 s trigger a yellow warning.
- Error Rate: Percentage of 429 (rate limit) and 5xx responses. An error rate above 5% triggers a red alert.
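The three layers above can be collapsed into a single traffic-light signal. Here is a minimal sketch of that mapping; the function name and argument shapes are illustrative, but the thresholds (3 s TTFT, 5% error rate) come from the rules described above:

```python
def classify_status(reachable: bool, median_ttft_s: float, error_rate: float) -> str:
    """Map the three monitored layers to a traffic-light status.

    Thresholds mirror the rules above: a 429/5xx error rate above 5%
    (or an unreachable endpoint) is red; median TTFT above 3 s is yellow.
    """
    if not reachable or error_rate > 0.05:
        return "red"
    if median_ttft_s > 3.0:
        return "yellow"
    return "green"
```

For example, `classify_status(True, 1.2, 0.01)` returns `"green"`, while `classify_status(True, 4.0, 0.0)` returns `"yellow"`.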
How to Use the Status Dashboard
The status page in AIWorkbench.dev shows a real-time grid of all providers. Each card displays:
- Current latency in milliseconds
- Uptime percentage over the last 24 hours
- Last error timestamp and type
- Region-specific health for multi-region providers like AWS Bedrock
Status-Driven Model Selection
When a provider is degraded, the workbench surfaces a warning banner. You can still use the model, but you should expect:
- Longer TTFT (time to first token)
- Higher rate-limit rejection rates
- Possibly incomplete responses on requests with very long context windows
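If you want status to influence routing rather than just display a banner, a simple policy is to prefer the first non-red model in a preference list. This is a sketch under assumed inputs (a preference list and a per-model status map), not a built-in workbench feature; a yellow (degraded) provider is still usable, so only red is skipped:

```python
def pick_model(preferences: list[str], status_by_model: dict[str, str]) -> str:
    """Return the first preferred model whose provider is not red.

    Unknown models are treated as red. If everything is red, fall back
    to the top preference anyway rather than failing outright.
    """
    for model in preferences:
        if status_by_model.get(model, "red") != "red":
            return model
    return preferences[0]
```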
Key Takeaway
Provider status is a signal, not a guarantee. Use it to avoid sending critical requests to a degraded endpoint, but always implement client-side retry logic for production applications.
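A minimal sketch of that client-side retry logic, using exponential backoff with jitter on 429 and 5xx responses. The `send` callable and its `.status_code` attribute are assumed interfaces standing in for whatever HTTP client you use:

```python
import random
import time

# Status codes worth retrying: rate limits and transient server errors.
RETRYABLE = {429, 500, 502, 503, 504}

def call_with_retry(send, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry `send()` on 429/5xx with exponential backoff plus jitter.

    `send` is any zero-argument callable returning a response object
    with a `.status_code` attribute (a hypothetical interface).
    Returns the last response, whether or not it succeeded.
    """
    for attempt in range(max_attempts):
        resp = send()
        if resp.status_code not in RETRYABLE:
            return resp
        if attempt < max_attempts - 1:
            # Double the delay each attempt and add jitter to avoid
            # synchronized retries hammering a degraded endpoint.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    return resp
```

Combined with the status dashboard, this covers both halves of the takeaway: the dashboard steers requests away from degraded endpoints, and the retry loop absorbs the failures that still get through.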