Multi-Provider Fallback

OpenAI is down. Anthropic has a 5xx storm. Your AI feature stays up because you fail over to whichever provider is healthy.

ReqLLM’s uniform provider:model API makes this a 20-line pattern: walk a list of model specs, return on first success, log every failure so you know which provider flaked. Combine it with a circuit breaker for production hardening.
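That loop can be sketched as follows. This is a minimal sketch, not an official ReqLLM helper: it assumes `ReqLLM.generate_text/2` accepts a `"provider:model"` spec string and returns `{:ok, response}` or `{:error, reason}`, and the model specs listed are placeholders.

```elixir
defmodule MyApp.FallbackLLM do
  require Logger

  # Ordered by preference: the first healthy provider wins.
  # These specs are illustrative; substitute the ones your app uses.
  @default_models [
    "openai:gpt-4o-mini",
    "anthropic:claude-3-5-haiku-latest",
    "google:gemini-2.0-flash"
  ]

  @doc "Walks the model list, returning on first success and logging each failure."
  def generate_text(prompt, models \\ @default_models)

  def generate_text(_prompt, []), do: {:error, :all_providers_failed}

  def generate_text(prompt, [model | rest]) do
    case ReqLLM.generate_text(model, prompt) do
      {:ok, response} ->
        {:ok, response}

      {:error, reason} ->
        # Log which provider flaked before trying the next spec.
        Logger.warning("#{model} failed: #{inspect(reason)}")
        generate_text(prompt, rest)
    end
  end
end
```

The ordering of the list encodes your routing preference; for production hardening, wrap each `ReqLLM.generate_text/2` call in a circuit breaker so a provider in a 5xx storm is skipped without paying its timeout on every request.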

Real-world failure modes this catches

  • Provider full outage (rare but devastating — happens once or twice a year per provider).
  • Regional rate limit ceilings — most providers throttle per-org, not per-account.
  • Model deprecation: a fallback spec pointing at an equivalent model on a different provider keeps the feature working while you migrate.
Don't use this pattern for cost optimization; it's overkill. For static routing, use the multi-provider recipe instead.