Provider Status

Real-time health monitoring for every AI provider.

Before you commit tokens to a model, you need to know if the API is healthy. AIWorkbench.dev provides a live status dashboard tracking uptime and response health across all supported providers.

Monitored Providers

Provider | Endpoint | Status Source
Anthropic Claude | api.anthropic.com | Official status page + ping
OpenAI GPT-4o | api.openai.com | Official status page + ping
Google Gemini | generativelanguage.googleapis.com | Google Cloud status
AWS Bedrock | bedrock-runtime.*.amazonaws.com | AWS service health
DeepSeek | api.deepseek.com | Direct API ping
Meta Llama | API provider dependent | Gateway health
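
In code, this table maps naturally to a small provider registry. The sketch below is illustrative only: the endpoints and status sources come from the table, but the structure and key names are assumptions, not AIWorkbench.dev's actual configuration.

```python
# Hypothetical provider registry; endpoints and status sources are taken from
# the table above, everything else (keys, structure) is an illustrative guess.
PROVIDERS = {
    "anthropic-claude": {
        "endpoint": "https://api.anthropic.com",
        "status_source": "official status page + ping",
    },
    "openai-gpt-4o": {
        "endpoint": "https://api.openai.com",
        "status_source": "official status page + ping",
    },
    "google-gemini": {
        "endpoint": "https://generativelanguage.googleapis.com",
        "status_source": "Google Cloud status",
    },
    "aws-bedrock": {
        # Region-specific; "us-east-1" is just an example region.
        "endpoint": "https://bedrock-runtime.us-east-1.amazonaws.com",
        "status_source": "AWS service health",
    },
    "deepseek": {
        "endpoint": "https://api.deepseek.com",
        "status_source": "direct API ping",
    },
    "meta-llama": {
        "endpoint": None,  # depends on which API provider hosts the model
        "status_source": "gateway health",
    },
}
```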

What "Status" Actually Means

A green light does not guarantee that your request will succeed. Status tracking measures three layers (a minimal classification sketch follows the list):

  1. HTTP Availability: Can we reach the endpoint? A 200 OK from the root path indicates the server is online.
  2. API Latency: Median TTFT (time to first token) over the last 5 minutes. Spikes above 3s trigger a yellow warning.
  3. Error Rate: Percentage of 429 (rate limit) and 5xx responses. Above 5% error rate triggers a red alert.
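
As a rough illustration of those three layers, here is a minimal classification sketch in Python. The thresholds (3s median TTFT, 5% error rate) come from the list above; the function name, the use of the `requests` library, and the idea of passing in pre-collected samples are assumptions, not the workbench's actual monitoring code.

```python
import statistics
import requests

LATENCY_WARN_S = 3.0   # yellow-warning threshold from the list above
ERROR_RATE_RED = 0.05  # red-alert threshold from the list above


def classify_provider(endpoint: str,
                      recent_ttfts: list[float],
                      recent_statuses: list[int]) -> str:
    """Classify a provider as "green", "yellow", or "red".

    `recent_ttfts` (seconds) and `recent_statuses` (HTTP codes) are samples
    collected over the last 5 minutes by whatever probe you run; this sketch
    shows only the classification logic, not the sampling loop.
    """
    # Layer 1: HTTP availability -- can we reach the endpoint at all?
    try:
        reachable = requests.get(endpoint, timeout=5).status_code == 200
    except requests.RequestException:
        reachable = False
    if not reachable:
        return "red"

    # Layer 3: error rate -- share of 429 and 5xx responses in the window.
    errors = sum(1 for code in recent_statuses if code == 429 or code >= 500)
    if recent_statuses and errors / len(recent_statuses) > ERROR_RATE_RED:
        return "red"

    # Layer 2: latency -- median time to first token over the window.
    if recent_ttfts and statistics.median(recent_ttfts) > LATENCY_WARN_S:
        return "yellow"

    return "green"
```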

How to Use the Status Dashboard

The status page in AIWorkbench.dev shows a real-time grid of all providers. Each card displays the following (sketched as a record after this list):

  • Current latency in milliseconds
  • Uptime percentage over the last 24 hours
  • Last error timestamp and type
  • Region-specific health for multi-region providers like AWS Bedrock
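
If you consume this data programmatically, each card corresponds roughly to one record. The shape below is a hypothetical sketch based on the fields listed above, not the dashboard's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ProviderStatusCard:
    """Hypothetical shape of one status card; field names are illustrative."""
    provider: str                      # e.g. "Anthropic Claude"
    latency_ms: float                  # current latency in milliseconds
    uptime_24h_pct: float              # uptime percentage over the last 24 hours
    last_error_at: Optional[datetime]  # timestamp of the most recent error, if any
    last_error_type: Optional[str]     # e.g. "429 rate limit" or "503"
    region_health: dict[str, str]      # per-region status, e.g. for AWS Bedrock
```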

Status-Driven Model Selection

When a provider is degraded, the workbench surfaces a warning banner. You can still use the model, but you should expect the following (a fallback-selection sketch follows this list):

  • Longer TTFT
  • Higher rate-limit rejection rates
  • Possible incomplete responses on very long context windows
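
One way to act on a degraded status is to prefer a healthy fallback before sending the request. The sketch below assumes a `get_status` hook that maps a model to "green" / "yellow" / "red" (for example by reading the status dashboard); both the hook and the model names are hypothetical, not a documented workbench API.

```python
from typing import Callable


def pick_model(preferred: str,
               fallbacks: list[str],
               get_status: Callable[[str], str]) -> str:
    """Return the first candidate whose provider reports "green"."""
    for model in [preferred, *fallbacks]:
        if get_status(model) == "green":
            return model
    # Every candidate is degraded: keep the preferred model and rely on
    # client-side retries (see the sketch under "Key Takeaway") to absorb errors.
    return preferred


# Usage with illustrative model names:
# model = pick_model("gpt-4o", ["claude-sonnet", "gemini-pro"], get_status)
```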

Key Takeaway

Provider status is a signal, not a guarantee. Use it to avoid sending critical requests to a degraded endpoint, but always implement client-side retry logic for production applications.
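
In practice, client-side retry logic usually means exponential backoff on rate limits and server errors. Here is a minimal sketch using the `requests` library; the retryable status codes mirror the error-rate definition above, and the function name and defaults are illustrative.

```python
import random
import time
import requests

RETRYABLE = {429, 500, 502, 503, 504}


def post_with_retries(url: str, payload: dict, max_attempts: int = 5) -> requests.Response:
    """POST with exponential backoff on rate-limit and server errors."""
    for attempt in range(max_attempts):
        try:
            resp = requests.post(url, json=payload, timeout=30)
            if resp.status_code not in RETRYABLE:
                return resp
        except requests.RequestException:
            pass  # network error: treat like a retryable failure
        if attempt < max_attempts - 1:
            # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus noise.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"Request to {url} failed after {max_attempts} attempts")
```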